Test Report: Docker_Linux_crio 21767

05a109d80d7e573d35c6ebc91a1126cc576c7968:2025-10-18:41956

Tests failed (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.24
35 TestAddons/parallel/Registry 13.29
36 TestAddons/parallel/RegistryCreds 0.4
37 TestAddons/parallel/Ingress 146.52
38 TestAddons/parallel/InspektorGadget 6.25
39 TestAddons/parallel/MetricsServer 5.31
41 TestAddons/parallel/CSI 57.32
42 TestAddons/parallel/Headlamp 2.55
43 TestAddons/parallel/CloudSpanner 5.25
44 TestAddons/parallel/LocalPath 10.11
45 TestAddons/parallel/NvidiaDevicePlugin 5.25
46 TestAddons/parallel/Yakd 6.24
47 TestAddons/parallel/AmdGpuDevicePlugin 6.28
98 TestFunctional/parallel/ServiceCmdConnect 602.88
115 TestFunctional/parallel/ServiceCmd/DeployApp 600.63
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.97
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.99
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.39
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
154 TestFunctional/parallel/ServiceCmd/Format 0.53
155 TestFunctional/parallel/ServiceCmd/URL 0.53
191 TestJSONOutput/pause/Command 1.62
197 TestJSONOutput/unpause/Command 1.96
291 TestPause/serial/Pause 5.32
340 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.42
348 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.3
358 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.19
362 TestStartStop/group/old-k8s-version/serial/Pause 6.36
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.6
373 TestStartStop/group/no-preload/serial/Pause 7.44
377 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.49
385 TestStartStop/group/newest-cni/serial/Pause 6.06
388 TestStartStop/group/embed-certs/serial/Pause 6.13
392 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.51
TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable volcano --alsologtostderr -v=1: exit status 11 (238.091121ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:31:49.072799   18929 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:31:49.073107   18929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:31:49.073118   18929 out.go:374] Setting ErrFile to fd 2...
	I1018 08:31:49.073124   18929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:31:49.073388   18929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:31:49.073669   18929 mustload.go:65] Loading cluster: addons-757656
	I1018 08:31:49.074006   18929 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:31:49.074021   18929 addons.go:606] checking whether the cluster is paused
	I1018 08:31:49.074121   18929 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:31:49.074137   18929 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:31:49.074577   18929 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:31:49.093146   18929 ssh_runner.go:195] Run: systemctl --version
	I1018 08:31:49.093213   18929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:31:49.111114   18929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:31:49.205881   18929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:31:49.205970   18929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:31:49.234943   18929 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:31:49.234964   18929 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:31:49.234968   18929 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:31:49.234971   18929 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:31:49.234973   18929 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:31:49.234978   18929 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:31:49.234980   18929 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:31:49.234982   18929 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:31:49.234985   18929 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:31:49.234994   18929 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:31:49.234997   18929 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:31:49.234999   18929 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:31:49.235001   18929 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:31:49.235004   18929 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:31:49.235006   18929 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:31:49.235010   18929 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:31:49.235012   18929 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:31:49.235017   18929 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:31:49.235019   18929 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:31:49.235022   18929 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:31:49.235024   18929 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:31:49.235027   18929 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:31:49.235029   18929 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:31:49.235032   18929 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:31:49.235036   18929 cri.go:89] found id: ""
	I1018 08:31:49.235079   18929 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:31:49.249142   18929 out.go:203] 
	W1018 08:31:49.250556   18929 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:31:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:31:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:31:49.250575   18929 out.go:285] * 
	* 
	W1018 08:31:49.253654   18929 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:31:49.254957   18929 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
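
Note on the failure mode: Volcano itself is skipped on crio (addons_test.go:850); what fails is the `addons disable` path. Before disabling an addon, minikube checks whether the cluster is paused: it lists kube-system containers with crictl (which succeeds above), then runs `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this node. Every MK_ADDON_DISABLE_PAUSED failure in this report shows the same two-step sequence. Below is a minimal standalone Go sketch of that probe, reconstructed from the log above; the two shell commands are verbatim from the log, everything else is illustrative and not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Step 1 (succeeds in the log): list kube-system container IDs via crictl.
	ids, err := exec.Command("sudo", "-s", "eval",
		"crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	fmt.Printf("crictl returned %d bytes of container IDs\n", len(ids))

	// Step 2 (fails in the log): ask runc for its container list. On this
	// crio node runc's default state directory /run/runc was never created,
	// so the command exits 1 with "open /run/runc: no such file or
	// directory", which minikube surfaces as MK_ADDON_DISABLE_PAUSED.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("runc:", err)
		return
	}
	fmt.Println(string(out))
}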

TestAddons/parallel/Registry (13.29s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.107385ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-lbbgc" [f8fb4269-2c69-4e40-ac5a-1a96111c3b97] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002509319s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-7g848" [66b8f065-f87f-49bb-a6ff-d7474f5b093b] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003216246s
addons_test.go:392: (dbg) Run:  kubectl --context addons-757656 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-757656 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-757656 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.789437798s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 ip
2025/10/18 08:32:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable registry --alsologtostderr -v=1: exit status 11 (236.462223ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:32:10.165754   20782 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:32:10.166007   20782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:10.166017   20782 out.go:374] Setting ErrFile to fd 2...
	I1018 08:32:10.166022   20782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:10.166220   20782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:32:10.166510   20782 mustload.go:65] Loading cluster: addons-757656
	I1018 08:32:10.166897   20782 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:10.166914   20782 addons.go:606] checking whether the cluster is paused
	I1018 08:32:10.166996   20782 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:10.167008   20782 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:32:10.167374   20782 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:32:10.188220   20782 ssh_runner.go:195] Run: systemctl --version
	I1018 08:32:10.188282   20782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:32:10.209067   20782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:32:10.304136   20782 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:32:10.304219   20782 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:32:10.333574   20782 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:32:10.333598   20782 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:32:10.333604   20782 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:32:10.333609   20782 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:32:10.333622   20782 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:32:10.333626   20782 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:32:10.333630   20782 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:32:10.333634   20782 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:32:10.333638   20782 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:32:10.333645   20782 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:32:10.333649   20782 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:32:10.333653   20782 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:32:10.333657   20782 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:32:10.333665   20782 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:32:10.333669   20782 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:32:10.333680   20782 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:32:10.333684   20782 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:32:10.333689   20782 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:32:10.333693   20782 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:32:10.333698   20782 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:32:10.333703   20782 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:32:10.333709   20782 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:32:10.333714   20782 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:32:10.333721   20782 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:32:10.333726   20782 cri.go:89] found id: ""
	I1018 08:32:10.333773   20782 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:32:10.347919   20782 out.go:203] 
	W1018 08:32:10.349438   20782 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:32:10.349465   20782 out.go:285] * 
	* 
	W1018 08:32:10.352473   20782 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:32:10.353750   20782 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.29s)

TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.738309ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-757656
addons_test.go:332: (dbg) Run:  kubectl --context addons-757656 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (235.174867ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:32:18.978662   22426 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:32:18.979011   22426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:18.979028   22426 out.go:374] Setting ErrFile to fd 2...
	I1018 08:32:18.979035   22426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:18.979390   22426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:32:18.979750   22426 mustload.go:65] Loading cluster: addons-757656
	I1018 08:32:18.980246   22426 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:18.980263   22426 addons.go:606] checking whether the cluster is paused
	I1018 08:32:18.980411   22426 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:18.980427   22426 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:32:18.980918   22426 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:32:19.002798   22426 ssh_runner.go:195] Run: systemctl --version
	I1018 08:32:19.002859   22426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:32:19.021631   22426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:32:19.117146   22426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:32:19.117234   22426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:32:19.146656   22426 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:32:19.146677   22426 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:32:19.146680   22426 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:32:19.146684   22426 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:32:19.146692   22426 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:32:19.146695   22426 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:32:19.146697   22426 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:32:19.146700   22426 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:32:19.146702   22426 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:32:19.146707   22426 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:32:19.146710   22426 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:32:19.146712   22426 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:32:19.146715   22426 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:32:19.146718   22426 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:32:19.146720   22426 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:32:19.146724   22426 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:32:19.146727   22426 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:32:19.146730   22426 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:32:19.146732   22426 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:32:19.146734   22426 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:32:19.146737   22426 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:32:19.146739   22426 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:32:19.146742   22426 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:32:19.146744   22426 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:32:19.146746   22426 cri.go:89] found id: ""
	I1018 08:32:19.146781   22426 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:32:19.160737   22426 out.go:203] 
	W1018 08:32:19.162120   22426 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:32:19.162150   22426 out.go:285] * 
	* 
	W1018 08:32:19.165213   22426 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:32:19.166498   22426 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)

TestAddons/parallel/Ingress (146.52s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-757656 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-757656 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-757656 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [8ead3ddf-42ec-486a-ba64-daf1c77a3047] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [8ead3ddf-42ec-486a-ba64-daf1c77a3047] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003304596s
I1018 08:32:19.810407    9394 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.891864008s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
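
The remote curl exited with status 28, curl's operation-timeout code, so the request hung rather than being refused; minikube ssh then propagated the failure as exit status 1 after 2m13s. For reference, a standalone Go equivalent of the probe (a sketch only; like the test's curl it has to run on the node itself, e.g. via `minikube ssh`, since 127.0.0.1 here is the node's loopback):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Equivalent of: curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'
	// Setting req.Host rewrites the Host header so ingress-nginx matches the
	// Ingress rule for nginx.example.com while dialing the loopback directly.
	client := &http.Client{Timeout: 30 * time.Second}
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		// A timeout here corresponds to curl's exit status 28.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("HTTP %d, %d bytes\n", resp.StatusCode, len(body))
}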
addons_test.go:288: (dbg) Run:  kubectl --context addons-757656 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-757656
helpers_test.go:243: (dbg) docker inspect addons-757656:

-- stdout --
	[
	    {
	        "Id": "df669ff7ec7e833eb29b74b0e3b95910965d3c06c3a09ea38921298da52bcf45",
	        "Created": "2025-10-18T08:29:48.229528523Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11387,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T08:29:48.271883206Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/df669ff7ec7e833eb29b74b0e3b95910965d3c06c3a09ea38921298da52bcf45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df669ff7ec7e833eb29b74b0e3b95910965d3c06c3a09ea38921298da52bcf45/hostname",
	        "HostsPath": "/var/lib/docker/containers/df669ff7ec7e833eb29b74b0e3b95910965d3c06c3a09ea38921298da52bcf45/hosts",
	        "LogPath": "/var/lib/docker/containers/df669ff7ec7e833eb29b74b0e3b95910965d3c06c3a09ea38921298da52bcf45/df669ff7ec7e833eb29b74b0e3b95910965d3c06c3a09ea38921298da52bcf45-json.log",
	        "Name": "/addons-757656",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-757656:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-757656",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "df669ff7ec7e833eb29b74b0e3b95910965d3c06c3a09ea38921298da52bcf45",
	                "LowerDir": "/var/lib/docker/overlay2/ec7921e7b6e429da7803d599e184f18672b75401cf485407ffe907779d476778-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec7921e7b6e429da7803d599e184f18672b75401cf485407ffe907779d476778/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec7921e7b6e429da7803d599e184f18672b75401cf485407ffe907779d476778/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec7921e7b6e429da7803d599e184f18672b75401cf485407ffe907779d476778/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-757656",
	                "Source": "/var/lib/docker/volumes/addons-757656/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-757656",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-757656",
	                "name.minikube.sigs.k8s.io": "addons-757656",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b47e490d1f8f9ee24203224605c22aaebaa70dd6240f5bf5cda00a52e2183a36",
	            "SandboxKey": "/var/run/docker/netns/b47e490d1f8f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-757656": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:a0:b6:7c:53:e7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1527547d992144c63156fafd65c37b1dece89a9ba9e6ee31e056182fd935ba2",
	                    "EndpointID": "9f2c5917b57aa4f7c58b5f3d017b52c25c9e69a61ef1707c3c958144638b4934",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-757656",
	                        "df669ff7ec7e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-757656 -n addons-757656
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-757656 logs -n 25: (1.195229046s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-658787 --alsologtostderr --binary-mirror http://127.0.0.1:33031 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-658787 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-658787                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-658787 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ addons  │ disable dashboard -p addons-757656                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-757656                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ start   │ -p addons-757656 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:31 UTC │
	│ addons  │ addons-757656 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:31 UTC │                     │
	│ addons  │ addons-757656 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-757656 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:31 UTC │                     │
	│ addons  │ addons-757656 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:31 UTC │                     │
	│ addons  │ addons-757656 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-757656 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-757656 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-757656 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ ip      │ addons-757656 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │ 18 Oct 25 08:32 UTC │
	│ addons  │ addons-757656 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-757656 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ ssh     │ addons-757656 ssh cat /opt/local-path-provisioner/pvc-3bb86e8d-f47c-4433-8e20-a46488dd0d44_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │ 18 Oct 25 08:32 UTC │
	│ addons  │ addons-757656 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-757656                                                                                                                                                                                                                                                                                                                                                                                           │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │ 18 Oct 25 08:32 UTC │
	│ addons  │ addons-757656 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ ssh     │ addons-757656 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-757656 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-757656 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-757656 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ ip      │ addons-757656 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-757656        │ jenkins │ v1.37.0 │ 18 Oct 25 08:34 UTC │ 18 Oct 25 08:34 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:29:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
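
Every entry below follows the klog convention spelled out in the format line above. As a minimal consumption sketch (the regexp and field names are illustrative, not taken from minikube's source), one regular expression splits an entry into severity, date, time, PID, source location, and message:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Matches klog-style lines such as:
    //   I1018 08:29:24.093914   10741 out.go:360] Setting OutFile to fd 1 ...
    var klogLine = regexp.MustCompile(
    	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
    	line := "I1018 08:29:24.093914   10741 out.go:360] Setting OutFile to fd 1 ..."
    	if m := klogLine.FindStringSubmatch(line); m != nil {
    		// m[1]=severity, m[2]=mmdd, m[3]=hh:mm:ss.uuuuuu, m[4]=pid,
    		// m[5]=file, m[6]=line, m[7]=message
    		fmt.Printf("sev=%s at=%s:%s msg=%q\n", m[1], m[5], m[6], m[7])
    	}
    }
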
	I1018 08:29:24.093914   10741 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:29:24.094049   10741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:24.094061   10741 out.go:374] Setting ErrFile to fd 2...
	I1018 08:29:24.094068   10741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:24.094259   10741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:29:24.094808   10741 out.go:368] Setting JSON to false
	I1018 08:29:24.095583   10741 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":712,"bootTime":1760775452,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:29:24.095662   10741 start.go:141] virtualization: kvm guest
	I1018 08:29:24.097700   10741 out.go:179] * [addons-757656] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:29:24.099157   10741 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 08:29:24.099160   10741 notify.go:220] Checking for updates...
	I1018 08:29:24.101888   10741 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:29:24.103369   10741 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 08:29:24.104735   10741 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 08:29:24.106062   10741 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 08:29:24.107350   10741 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:29:24.108610   10741 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:29:24.130454   10741 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 08:29:24.130551   10741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:24.187500   10741 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-18 08:29:24.177828343 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:29:24.187614   10741 docker.go:318] overlay module found
	I1018 08:29:24.189982   10741 out.go:179] * Using the docker driver based on user configuration
	I1018 08:29:24.191310   10741 start.go:305] selected driver: docker
	I1018 08:29:24.191330   10741 start.go:925] validating driver "docker" against <nil>
	I1018 08:29:24.191371   10741 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:29:24.191925   10741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:24.245550   10741 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-18 08:29:24.23642069 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:29:24.245698   10741 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:29:24.245923   10741 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:29:24.247823   10741 out.go:179] * Using Docker driver with root privileges
	I1018 08:29:24.249111   10741 cni.go:84] Creating CNI manager for ""
	I1018 08:29:24.249175   10741 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:29:24.249186   10741 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 08:29:24.249276   10741 start.go:349] cluster config:
	{Name:addons-757656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:29:24.250912   10741 out.go:179] * Starting "addons-757656" primary control-plane node in "addons-757656" cluster
	I1018 08:29:24.252082   10741 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 08:29:24.253486   10741 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 08:29:24.254844   10741 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:29:24.254880   10741 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 08:29:24.254887   10741 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 08:29:24.254896   10741 cache.go:58] Caching tarball of preloaded images
	I1018 08:29:24.254991   10741 preload.go:233] Found /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 08:29:24.255006   10741 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 08:29:24.255418   10741 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/config.json ...
	I1018 08:29:24.255446   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/config.json: {Name:mk554a6a07222424ec37abcb218df63c14178bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:24.271705   10741 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 08:29:24.271897   10741 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 08:29:24.271920   10741 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 08:29:24.271926   10741 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 08:29:24.271938   10741 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 08:29:24.271948   10741 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 08:29:36.474287   10741 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 08:29:36.474329   10741 cache.go:232] Successfully downloaded all kic artifacts
	I1018 08:29:36.474376   10741 start.go:360] acquireMachinesLock for addons-757656: {Name:mkc2473273a000321588bf99eb2b2fb8faac67ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:29:36.474481   10741 start.go:364] duration metric: took 84.004µs to acquireMachinesLock for "addons-757656"
	I1018 08:29:36.474511   10741 start.go:93] Provisioning new machine with config: &{Name:addons-757656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
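
The struct dumps above are also persisted verbatim as the profile's config.json (see the "Saving config" lines earlier in this trace). A sketch of reading a few of those fields back; the struct below is a hand-picked subset, not minikube's real ClusterConfig type, and the path is illustrative:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Subset of the profile config fields shown in the log; names follow the
    // dump above, but this is not the full minikube config schema.
    type clusterConfig struct {
    	Name             string
    	Driver           string
    	Memory           int
    	CPUs             int
    	KubernetesConfig struct {
    		KubernetesVersion string
    		ContainerRuntime  string
    		ServiceCIDR       string
    	}
    }

    func main() {
    	// Illustrative path; the log writes .minikube/profiles/addons-757656/config.json.
    	data, err := os.ReadFile("config.json")
    	if err != nil {
    		panic(err)
    	}
    	var cfg clusterConfig
    	if err := json.Unmarshal(data, &cfg); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s: %s on %s (%s)\n", cfg.Name,
    		cfg.KubernetesConfig.KubernetesVersion, cfg.Driver,
    		cfg.KubernetesConfig.ContainerRuntime)
    }
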
	I1018 08:29:36.474580   10741 start.go:125] createHost starting for "" (driver="docker")
	I1018 08:29:36.476411   10741 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 08:29:36.476608   10741 start.go:159] libmachine.API.Create for "addons-757656" (driver="docker")
	I1018 08:29:36.476638   10741 client.go:168] LocalClient.Create starting
	I1018 08:29:36.476754   10741 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem
	I1018 08:29:36.576204   10741 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem
	I1018 08:29:36.824033   10741 cli_runner.go:164] Run: docker network inspect addons-757656 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 08:29:36.840911   10741 cli_runner.go:211] docker network inspect addons-757656 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 08:29:36.840985   10741 network_create.go:284] running [docker network inspect addons-757656] to gather additional debugging logs...
	I1018 08:29:36.841004   10741 cli_runner.go:164] Run: docker network inspect addons-757656
	W1018 08:29:36.857274   10741 cli_runner.go:211] docker network inspect addons-757656 returned with exit code 1
	I1018 08:29:36.857301   10741 network_create.go:287] error running [docker network inspect addons-757656]: docker network inspect addons-757656: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-757656 not found
	I1018 08:29:36.857316   10741 network_create.go:289] output of [docker network inspect addons-757656]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-757656 not found
	
	** /stderr **
	I1018 08:29:36.857464   10741 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 08:29:36.874425   10741 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ce8620}
	I1018 08:29:36.874467   10741 network_create.go:124] attempt to create docker network addons-757656 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 08:29:36.874522   10741 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-757656 addons-757656
	I1018 08:29:36.929763   10741 network_create.go:108] docker network addons-757656 192.168.49.0/24 created
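
minikube drives Docker through its CLI rather than the API (hence the cli_runner Run lines). A sketch of issuing the same network-create call from Go, assuming docker is on PATH and reusing the subnet, gateway, MTU, and labels logged above; Docker prints the new network ID on stdout:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors the logged command: a bridge network with a fixed subnet,
    	// gateway, MTU, and minikube's bookkeeping labels.
    	cmd := exec.Command("docker", "network", "create",
    		"--driver=bridge",
    		"--subnet=192.168.49.0/24",
    		"--gateway=192.168.49.1",
    		"-o", "--ip-masq", "-o", "--icc",
    		"-o", "com.docker.network.driver.mtu=1500",
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		"--label=name.minikube.sigs.k8s.io=addons-757656",
    		"addons-757656")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Printf("network create failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("created network, id=%s", out)
    }
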
	I1018 08:29:36.929789   10741 kic.go:121] calculated static IP "192.168.49.2" for the "addons-757656" container
	I1018 08:29:36.929855   10741 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 08:29:36.946732   10741 cli_runner.go:164] Run: docker volume create addons-757656 --label name.minikube.sigs.k8s.io=addons-757656 --label created_by.minikube.sigs.k8s.io=true
	I1018 08:29:36.964867   10741 oci.go:103] Successfully created a docker volume addons-757656
	I1018 08:29:36.964944   10741 cli_runner.go:164] Run: docker run --rm --name addons-757656-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-757656 --entrypoint /usr/bin/test -v addons-757656:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 08:29:43.616506   10741 cli_runner.go:217] Completed: docker run --rm --name addons-757656-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-757656 --entrypoint /usr/bin/test -v addons-757656:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (6.651520905s)
	I1018 08:29:43.616534   10741 oci.go:107] Successfully prepared a docker volume addons-757656
	I1018 08:29:43.616561   10741 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:29:43.616584   10741 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 08:29:43.616647   10741 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-757656:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 08:29:48.154088   10741 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-757656:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.537376023s)
	I1018 08:29:48.154117   10741 kic.go:203] duration metric: took 4.537533415s to extract preloaded images to volume ...
	W1018 08:29:48.154192   10741 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 08:29:48.154219   10741 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 08:29:48.154250   10741 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 08:29:48.213612   10741 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-757656 --name addons-757656 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-757656 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-757656 --network addons-757656 --ip 192.168.49.2 --volume addons-757656:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 08:29:48.523840   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Running}}
	I1018 08:29:48.544024   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:29:48.563155   10741 cli_runner.go:164] Run: docker exec addons-757656 stat /var/lib/dpkg/alternatives/iptables
	I1018 08:29:48.611164   10741 oci.go:144] the created container "addons-757656" has a running status.
	I1018 08:29:48.611194   10741 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa...
	I1018 08:29:48.856905   10741 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 08:29:48.891419   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:29:48.912598   10741 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 08:29:48.912618   10741 kic_runner.go:114] Args: [docker exec --privileged addons-757656 chown docker:docker /home/docker/.ssh/authorized_keys]
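
The kic key step above amounts to: generate an RSA keypair, keep the private half as the machine's id_rsa, and install the public half as authorized_keys inside the container (the chown makes it owned by the docker user). A self-contained sketch of the keypair half, with placeholder output paths:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Generate an RSA keypair comparable to the machine id_rsa in the log.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// PEM-encode the private key (id_rsa), readable only by the owner.
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
    		panic(err)
    	}
    	// authorized_keys line (id_rsa.pub) destined for /home/docker/.ssh/.
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
    		panic(err)
    	}
    }
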
	I1018 08:29:48.965154   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:29:48.985009   10741 machine.go:93] provisionDockerMachine start ...
	I1018 08:29:48.985083   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:49.005164   10741 main.go:141] libmachine: Using SSH client type: native
	I1018 08:29:49.005493   10741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 08:29:49.005510   10741 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 08:29:49.138104   10741 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-757656
	
	I1018 08:29:49.138130   10741 ubuntu.go:182] provisioning hostname "addons-757656"
	I1018 08:29:49.138193   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:49.157056   10741 main.go:141] libmachine: Using SSH client type: native
	I1018 08:29:49.157269   10741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 08:29:49.157286   10741 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-757656 && echo "addons-757656" | sudo tee /etc/hostname
	I1018 08:29:49.298773   10741 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-757656
	
	I1018 08:29:49.298846   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:49.316764   10741 main.go:141] libmachine: Using SSH client type: native
	I1018 08:29:49.316985   10741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 08:29:49.317005   10741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-757656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-757656/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-757656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 08:29:49.448892   10741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 08:29:49.448921   10741 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 08:29:49.448957   10741 ubuntu.go:190] setting up certificates
	I1018 08:29:49.448974   10741 provision.go:84] configureAuth start
	I1018 08:29:49.449022   10741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-757656
	I1018 08:29:49.466987   10741 provision.go:143] copyHostCerts
	I1018 08:29:49.467071   10741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 08:29:49.467208   10741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 08:29:49.467293   10741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 08:29:49.467383   10741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.addons-757656 san=[127.0.0.1 192.168.49.2 addons-757656 localhost minikube]
	I1018 08:29:49.557013   10741 provision.go:177] copyRemoteCerts
	I1018 08:29:49.557068   10741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 08:29:49.557102   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:49.575013   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:29:49.670675   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 08:29:49.689905   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 08:29:49.707122   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 08:29:49.723960   10741 provision.go:87] duration metric: took 274.97423ms to configureAuth
	I1018 08:29:49.723989   10741 ubuntu.go:206] setting minikube options for container-runtime
	I1018 08:29:49.724161   10741 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:29:49.724269   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:49.741964   10741 main.go:141] libmachine: Using SSH client type: native
	I1018 08:29:49.742239   10741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 08:29:49.742265   10741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 08:29:49.982807   10741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 08:29:49.982829   10741 machine.go:96] duration metric: took 997.800508ms to provisionDockerMachine
	I1018 08:29:49.982839   10741 client.go:171] duration metric: took 13.506192733s to LocalClient.Create
	I1018 08:29:49.982853   10741 start.go:167] duration metric: took 13.506246522s to libmachine.API.Create "addons-757656"
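
Each "About to run SSH command" block in this trace is a one-shot session against the container's forwarded SSH port on 127.0.0.1. A sketch of that pattern with golang.org/x/crypto/ssh, reusing the port and key path from this run; the host-key check is skipped only because the target is a disposable local container:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("id_rsa") // machine key from the log
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User: "docker",
    		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		// Local throwaway container only; never do this against real hosts.
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("hostname")
    	fmt.Printf("err=%v out=%s", err, out)
    }
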
	I1018 08:29:49.982860   10741 start.go:293] postStartSetup for "addons-757656" (driver="docker")
	I1018 08:29:49.982873   10741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 08:29:49.982927   10741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 08:29:49.982973   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:50.000659   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:29:50.100064   10741 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 08:29:50.103735   10741 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 08:29:50.103761   10741 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 08:29:50.103774   10741 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 08:29:50.103843   10741 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 08:29:50.103870   10741 start.go:296] duration metric: took 121.003691ms for postStartSetup
	I1018 08:29:50.104291   10741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-757656
	I1018 08:29:50.121985   10741 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/config.json ...
	I1018 08:29:50.122240   10741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:29:50.122279   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:50.139796   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:29:50.232289   10741 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 08:29:50.236939   10741 start.go:128] duration metric: took 13.762347082s to createHost
	I1018 08:29:50.236957   10741 start.go:83] releasing machines lock for "addons-757656", held for 13.762463086s
	I1018 08:29:50.237008   10741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-757656
	I1018 08:29:50.254404   10741 ssh_runner.go:195] Run: cat /version.json
	I1018 08:29:50.254446   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:50.254490   10741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 08:29:50.254545   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:50.273912   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:29:50.273925   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:29:50.365224   10741 ssh_runner.go:195] Run: systemctl --version
	I1018 08:29:50.421536   10741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 08:29:50.455321   10741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 08:29:50.459890   10741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 08:29:50.459961   10741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 08:29:50.484894   10741 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 08:29:50.484914   10741 start.go:495] detecting cgroup driver to use...
	I1018 08:29:50.484942   10741 detect.go:190] detected "systemd" cgroup driver on host os
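
The detected cgroup driver matters because kubelet and CRI-O must agree on a cgroup manager (the sed edits below rewrite cgroup_manager accordingly). As an assumed sketch of one common detection heuristic, checking whether systemd is managing the host; this is not necessarily minikube's exact code path:

    package main

    import (
    	"fmt"
    	"os"
    )

    // cgroupDriver guesses the appropriate driver: on systemd-managed hosts
    // the directory /run/systemd/system exists, so "systemd" is the safe
    // choice; otherwise fall back to "cgroupfs".
    func cgroupDriver() string {
    	if fi, err := os.Stat("/run/systemd/system"); err == nil && fi.IsDir() {
    		return "systemd"
    	}
    	return "cgroupfs"
    }

    func main() {
    	fmt.Println(cgroupDriver())
    }
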
	I1018 08:29:50.484979   10741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 08:29:50.500755   10741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 08:29:50.513157   10741 docker.go:218] disabling cri-docker service (if available) ...
	I1018 08:29:50.513207   10741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 08:29:50.529683   10741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 08:29:50.547058   10741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 08:29:50.629978   10741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 08:29:50.716529   10741 docker.go:234] disabling docker service ...
	I1018 08:29:50.716593   10741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 08:29:50.734187   10741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 08:29:50.746944   10741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 08:29:50.828933   10741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 08:29:50.908510   10741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 08:29:50.920572   10741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 08:29:50.934308   10741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 08:29:50.934391   10741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:50.944610   10741 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 08:29:50.944666   10741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:50.954214   10741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:50.962850   10741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:50.971381   10741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 08:29:50.979559   10741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:50.987933   10741 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:51.000899   10741 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:51.009761   10741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 08:29:51.017120   10741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 08:29:51.017165   10741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 08:29:51.029195   10741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
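
The sequence above is the usual bridge-networking preflight: probe the net.bridge.bridge-nf-call-iptables sysctl, load br_netfilter when it is absent, then enable IPv4 forwarding. A sketch of the same probe-then-modprobe logic (must run as root; modprobe assumed on PATH):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const sysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(sysctl); err != nil {
    		// Sysctl file absent: the br_netfilter module is not loaded yet.
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Printf("modprobe br_netfilter: %v\n%s", err, out)
    			return
    		}
    	}
    	// Enable IPv4 forwarding, mirroring `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		fmt.Println(err)
    	}
    }
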
	I1018 08:29:51.036832   10741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:29:51.113393   10741 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 08:29:51.209520   10741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 08:29:51.209584   10741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 08:29:51.213495   10741 start.go:563] Will wait 60s for crictl version
	I1018 08:29:51.213557   10741 ssh_runner.go:195] Run: which crictl
	I1018 08:29:51.217048   10741 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 08:29:51.241017   10741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 08:29:51.241153   10741 ssh_runner.go:195] Run: crio --version
	I1018 08:29:51.267919   10741 ssh_runner.go:195] Run: crio --version
	I1018 08:29:51.296632   10741 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 08:29:51.297909   10741 cli_runner.go:164] Run: docker network inspect addons-757656 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 08:29:51.315942   10741 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 08:29:51.319960   10741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 08:29:51.330156   10741 kubeadm.go:883] updating cluster {Name:addons-757656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 08:29:51.330289   10741 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:29:51.330396   10741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:29:51.360372   10741 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:29:51.360391   10741 crio.go:433] Images already preloaded, skipping extraction
	I1018 08:29:51.360433   10741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:29:51.386960   10741 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:29:51.386985   10741 cache_images.go:85] Images are preloaded, skipping loading
	I1018 08:29:51.386993   10741 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 08:29:51.387089   10741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-757656 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-757656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 08:29:51.387165   10741 ssh_runner.go:195] Run: crio config
	I1018 08:29:51.431508   10741 cni.go:84] Creating CNI manager for ""
	I1018 08:29:51.431532   10741 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:29:51.431548   10741 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 08:29:51.431567   10741 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-757656 NodeName:addons-757656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 08:29:51.431678   10741 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-757656"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
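
The generated kubeadm.yaml above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch of walking such a stream generically with gopkg.in/yaml.v3; only apiVersion and kind are pulled out here, whereas real tooling would decode into the typed kubeadm APIs:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // e.g. /var/tmp/minikube/kubeadm.yaml
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	// yaml.Decoder iterates the "---"-separated documents in order.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("apiVersion=%v kind=%v\n", doc["apiVersion"], doc["kind"])
    	}
    }
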
	
	I1018 08:29:51.431733   10741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 08:29:51.439689   10741 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 08:29:51.439761   10741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 08:29:51.447468   10741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 08:29:51.460072   10741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 08:29:51.475088   10741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1018 08:29:51.487925   10741 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 08:29:51.491525   10741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 08:29:51.501457   10741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:29:51.579367   10741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:29:51.605813   10741 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656 for IP: 192.168.49.2
	I1018 08:29:51.605834   10741 certs.go:195] generating shared ca certs ...
	I1018 08:29:51.605853   10741 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:51.605989   10741 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 08:29:51.827085   10741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt ...
	I1018 08:29:51.827115   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt: {Name:mk28a5ba0a34efca8afa23abdcf9ad584c7103de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:51.827294   10741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key ...
	I1018 08:29:51.827305   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key: {Name:mk2fe1cb6618b0f657685c882ef4773999853869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:51.827405   10741 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 08:29:51.867574   10741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt ...
	I1018 08:29:51.867604   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt: {Name:mk4e910ca84ebcb66150ba18f5dfb85c9254b593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:51.867769   10741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key ...
	I1018 08:29:51.867781   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key: {Name:mk0293506165d04a11676a23b18a8df4817f4410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:51.867851   10741 certs.go:257] generating profile certs ...
	I1018 08:29:51.867902   10741 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.key
	I1018 08:29:51.867916   10741 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt with IP's: []
	I1018 08:29:52.096824   10741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt ...
	I1018 08:29:52.096854   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: {Name:mk9c4099f9162ba4e2a1492118f57f87701bc8c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:52.097040   10741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.key ...
	I1018 08:29:52.097053   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.key: {Name:mke4a978a87f2b4173a66dd618c8e26416d6b3a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:52.097125   10741 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.key.7277db9b
	I1018 08:29:52.097153   10741 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.crt.7277db9b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 08:29:52.269189   10741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.crt.7277db9b ...
	I1018 08:29:52.269229   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.crt.7277db9b: {Name:mkaa4ce119869d0402d8da221a60e8e2659b444a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:52.269395   10741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.key.7277db9b ...
	I1018 08:29:52.269408   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.key.7277db9b: {Name:mkd572150577d72d661f13406eee9a2c31731770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:52.269489   10741 certs.go:382] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.crt.7277db9b -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.crt
	I1018 08:29:52.269566   10741 certs.go:386] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.key.7277db9b -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.key
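
The apiserver cert is first written under a suffixed name (.7277db9b) and then copied to the canonical apiserver.crt/apiserver.key paths, so the suffixed file name can track the SAN set it was signed for. One plausible way to derive such a suffix — the log shows only the resulting tag, not the actual computation, so this is a hypothetical sketch:

	package main

	import (
		"crypto/sha1"
		"fmt"
		"strings"
	)

	// sanSuffix returns an 8-hex-char tag for a SAN set, so the cert file
	// name changes whenever the IP/name set changes. Hypothetical derivation;
	// minikube's real scheme is not visible in this log.
	func sanSuffix(sans []string) string {
		sum := sha1.Sum([]byte(strings.Join(sans, ",")))
		return fmt.Sprintf("%x", sum)[:8]
	}

	func main() {
		fmt.Println(sanSuffix([]string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "192.168.49.2"}))
	}
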
	I1018 08:29:52.269618   10741 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.key
	I1018 08:29:52.269636   10741 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.crt with IP's: []
	I1018 08:29:52.384935   10741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.crt ...
	I1018 08:29:52.384964   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.crt: {Name:mkd7d0cff0b7dd9f28e2e44206989f9df30cab10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:52.385131   10741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.key ...
	I1018 08:29:52.385143   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.key: {Name:mk9045e5a26bc5acfac279609bef21ae5373c7c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:52.385324   10741 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 08:29:52.385368   10741 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 08:29:52.385393   10741 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 08:29:52.385414   10741 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 08:29:52.385922   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 08:29:52.403990   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 08:29:52.421369   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 08:29:52.438488   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 08:29:52.455171   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 08:29:52.472259   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 08:29:52.489309   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 08:29:52.506488   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 08:29:52.523670   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 08:29:52.542477   10741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 08:29:52.554993   10741 ssh_runner.go:195] Run: openssl version
	I1018 08:29:52.561085   10741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 08:29:52.572057   10741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:29:52.575837   10741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:29:52.575900   10741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:29:52.609611   10741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
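
The b5213941.0 symlink name is the OpenSSL subject hash of minikubeCA.pem, which is how OpenSSL locates trusted CAs in /etc/ssl/certs. A small Go sketch of the hash-then-link step, shelling out to openssl exactly as the two Run: commands above do (requires root and the pem file to actually run):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		// `openssl x509 -hash -noout` prints the subject hash used as the
		// link name in /etc/ssl/certs (here: b5213941).
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		_ = os.Remove(link) // emulate `ln -fs`: replace any existing link
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}
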
	I1018 08:29:52.618548   10741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 08:29:52.622064   10741 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 08:29:52.622116   10741 kubeadm.go:400] StartCluster: {Name:addons-757656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:29:52.622196   10741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:29:52.622272   10741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:29:52.647426   10741 cri.go:89] found id: ""
	I1018 08:29:52.647500   10741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 08:29:52.655400   10741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 08:29:52.663328   10741 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 08:29:52.663444   10741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 08:29:52.670965   10741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 08:29:52.670985   10741 kubeadm.go:157] found existing configuration files:
	
	I1018 08:29:52.671030   10741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 08:29:52.678405   10741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 08:29:52.678455   10741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 08:29:52.685933   10741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 08:29:52.693498   10741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 08:29:52.693544   10741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 08:29:52.700831   10741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 08:29:52.708265   10741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 08:29:52.708331   10741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 08:29:52.715620   10741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 08:29:52.723144   10741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 08:29:52.723214   10741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
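
Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here the files simply don't exist yet), so kubeadm regenerates all of them from scratch. A compact Go sketch of that check-then-remove loop:

	package main

	import (
		"bytes"
		"os"
	)

	// ensureEndpoint removes a kubeconfig that does not reference the expected
	// control-plane endpoint, so a later `kubeadm init` rewrites it.
	func ensureEndpoint(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			return nil // file exists and already points at the right endpoint
		}
		// Missing file or wrong endpoint: delete, ignoring "not exist"
		// errors — the equivalent of `sudo rm -f`.
		if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
			return err
		}
		return nil
	}

	func main() {
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			_ = ensureEndpoint(f, "https://control-plane.minikube.internal:8443")
		}
	}
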
	I1018 08:29:52.730606   10741 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 08:29:52.766540   10741 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 08:29:52.766620   10741 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 08:29:52.787577   10741 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 08:29:52.787652   10741 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 08:29:52.787693   10741 kubeadm.go:318] OS: Linux
	I1018 08:29:52.787734   10741 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 08:29:52.787802   10741 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 08:29:52.787872   10741 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 08:29:52.787943   10741 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 08:29:52.788014   10741 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 08:29:52.788088   10741 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 08:29:52.788161   10741 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 08:29:52.788232   10741 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 08:29:52.842883   10741 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 08:29:52.843033   10741 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 08:29:52.843185   10741 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 08:29:52.850569   10741 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 08:29:52.852509   10741 out.go:252]   - Generating certificates and keys ...
	I1018 08:29:52.852638   10741 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 08:29:52.852737   10741 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 08:29:53.320852   10741 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 08:29:53.747842   10741 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 08:29:54.250527   10741 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 08:29:54.549979   10741 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 08:29:54.708478   10741 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 08:29:54.708607   10741 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-757656 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 08:29:54.753599   10741 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 08:29:54.753772   10741 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-757656 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 08:29:55.181818   10741 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 08:29:55.494491   10741 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 08:29:55.717872   10741 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 08:29:55.717937   10741 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 08:29:55.975586   10741 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 08:29:56.311297   10741 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 08:29:56.379102   10741 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 08:29:56.696014   10741 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 08:29:57.154904   10741 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 08:29:57.155360   10741 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 08:29:57.159451   10741 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 08:29:57.160840   10741 out.go:252]   - Booting up control plane ...
	I1018 08:29:57.160957   10741 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 08:29:57.161064   10741 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 08:29:57.161525   10741 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 08:29:57.174906   10741 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 08:29:57.175041   10741 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 08:29:57.181808   10741 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 08:29:57.182026   10741 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 08:29:57.182112   10741 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 08:29:57.280521   10741 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 08:29:57.280748   10741 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 08:29:58.281073   10741 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000734122s
	I1018 08:29:58.283810   10741 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 08:29:58.283945   10741 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 08:29:58.284075   10741 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 08:29:58.284193   10741 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 08:29:59.821569   10741 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.537650296s
	I1018 08:30:00.186887   10741 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.90300259s
	I1018 08:30:01.785832   10741 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501975091s
	I1018 08:30:01.798590   10741 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 08:30:01.810692   10741 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 08:30:01.819762   10741 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 08:30:01.820029   10741 kubeadm.go:318] [mark-control-plane] Marking the node addons-757656 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 08:30:01.828425   10741 kubeadm.go:318] [bootstrap-token] Using token: j5k97x.1ffhdgaf3x41p7vg
	I1018 08:30:01.830052   10741 out.go:252]   - Configuring RBAC rules ...
	I1018 08:30:01.830179   10741 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 08:30:01.833640   10741 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 08:30:01.839023   10741 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 08:30:01.841477   10741 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 08:30:01.843880   10741 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 08:30:01.847485   10741 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 08:30:02.193073   10741 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 08:30:02.609987   10741 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 08:30:03.193295   10741 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 08:30:03.194097   10741 kubeadm.go:318] 
	I1018 08:30:03.194179   10741 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 08:30:03.194197   10741 kubeadm.go:318] 
	I1018 08:30:03.194320   10741 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 08:30:03.194334   10741 kubeadm.go:318] 
	I1018 08:30:03.194384   10741 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 08:30:03.194478   10741 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 08:30:03.194553   10741 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 08:30:03.194563   10741 kubeadm.go:318] 
	I1018 08:30:03.194639   10741 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 08:30:03.194651   10741 kubeadm.go:318] 
	I1018 08:30:03.194710   10741 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 08:30:03.194721   10741 kubeadm.go:318] 
	I1018 08:30:03.194770   10741 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 08:30:03.194834   10741 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 08:30:03.194896   10741 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 08:30:03.194918   10741 kubeadm.go:318] 
	I1018 08:30:03.194991   10741 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 08:30:03.195063   10741 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 08:30:03.195074   10741 kubeadm.go:318] 
	I1018 08:30:03.195144   10741 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token j5k97x.1ffhdgaf3x41p7vg \
	I1018 08:30:03.195234   10741 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f \
	I1018 08:30:03.195260   10741 kubeadm.go:318] 	--control-plane 
	I1018 08:30:03.195267   10741 kubeadm.go:318] 
	I1018 08:30:03.195338   10741 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 08:30:03.195366   10741 kubeadm.go:318] 
	I1018 08:30:03.195450   10741 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token j5k97x.1ffhdgaf3x41p7vg \
	I1018 08:30:03.195610   10741 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f 
	I1018 08:30:03.197879   10741 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 08:30:03.198042   10741 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 08:30:03.198082   10741 cni.go:84] Creating CNI manager for ""
	I1018 08:30:03.198097   10741 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
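
cni.go picks a CNI from the driver/runtime pair; with the docker driver and a non-Docker runtime such as crio it recommends kindnet, since the runtime cannot rely on Docker's built-in bridge networking. A simplified sketch of that decision (the real logic covers many more cases):

	package main

	import "fmt"

	// chooseCNI sketches the decision the log records above. Simplified;
	// minikube's actual cni.go also handles multinode, custom CNIs, etc.
	func chooseCNI(driver, runtime string) string {
		if driver == "docker" && runtime != "docker" {
			return "kindnet"
		}
		return "bridge"
	}

	func main() {
		fmt.Println(chooseCNI("docker", "crio")) // kindnet
	}
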
	I1018 08:30:03.200045   10741 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 08:30:03.201256   10741 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 08:30:03.205587   10741 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 08:30:03.205603   10741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 08:30:03.218621   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 08:30:03.418569   10741 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 08:30:03.418713   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:03.418749   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-757656 minikube.k8s.io/updated_at=2025_10_18T08_30_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=addons-757656 minikube.k8s.io/primary=true
	I1018 08:30:03.427864   10741 ops.go:34] apiserver oom_adj: -16
	I1018 08:30:03.497839   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:03.998546   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:04.498441   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:04.998074   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:05.498611   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:05.998008   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:06.498214   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:06.998558   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:07.498434   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:07.998557   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:08.061772   10741 kubeadm.go:1113] duration metric: took 4.643120777s to wait for elevateKubeSystemPrivileges
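
The burst of `kubectl get sa default` runs above is a ~500ms poll: privilege elevation is considered complete once the default service account exists in the cluster. A self-contained Go version of that wait, shelling out to kubectl rather than using a client library:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls `kubectl get sa default` until it succeeds —
	// the same readiness signal the log shows being retried every ~500ms.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
			if cmd.Run() == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("default service account not ready after %s", timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			panic(err)
		}
	}
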
	I1018 08:30:08.061816   10741 kubeadm.go:402] duration metric: took 15.439702063s to StartCluster
	I1018 08:30:08.061839   10741 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:08.061968   10741 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 08:30:08.062361   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:08.062569   10741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 08:30:08.062579   10741 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:30:08.062639   10741 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
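
The "Setting addon ..." lines that follow are interleaved and out of order, consistent with one goroutine per addon working off the toEnable map above. A hypothetical sketch of that fan-out shape (not minikube's actual addons.go):

	package main

	import (
		"fmt"
		"sync"
	)

	func main() {
		toEnable := map[string]bool{
			"registry": true, "ingress": true, "metrics-server": true,
			"volcano": true, "yakd": true,
		}
		var wg sync.WaitGroup
		for name, enabled := range toEnable {
			if !enabled {
				continue
			}
			wg.Add(1)
			go func(name string) {
				defer wg.Done()
				// Per-addon work runs concurrently, so log lines interleave.
				fmt.Printf("Setting addon %s=true in \"addons-757656\"\n", name)
			}(name)
		}
		wg.Wait()
	}
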
	I1018 08:30:08.062782   10741 addons.go:69] Setting yakd=true in profile "addons-757656"
	I1018 08:30:08.062802   10741 addons.go:69] Setting registry-creds=true in profile "addons-757656"
	I1018 08:30:08.062816   10741 addons.go:69] Setting storage-provisioner=true in profile "addons-757656"
	I1018 08:30:08.062819   10741 addons.go:69] Setting gcp-auth=true in profile "addons-757656"
	I1018 08:30:08.062828   10741 addons.go:238] Setting addon registry-creds=true in "addons-757656"
	I1018 08:30:08.062843   10741 addons.go:238] Setting addon storage-provisioner=true in "addons-757656"
	I1018 08:30:08.062859   10741 mustload.go:65] Loading cluster: addons-757656
	I1018 08:30:08.062871   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.062861   10741 addons.go:69] Setting default-storageclass=true in profile "addons-757656"
	I1018 08:30:08.062901   10741 addons.go:69] Setting registry=true in profile "addons-757656"
	I1018 08:30:08.062913   10741 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-757656"
	I1018 08:30:08.062899   10741 addons.go:69] Setting ingress=true in profile "addons-757656"
	I1018 08:30:08.062913   10741 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-757656"
	I1018 08:30:08.062949   10741 addons.go:238] Setting addon registry=true in "addons-757656"
	I1018 08:30:08.062952   10741 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-757656"
	I1018 08:30:08.062968   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.062972   10741 addons.go:238] Setting addon ingress=true in "addons-757656"
	I1018 08:30:08.063004   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.063004   10741 addons.go:69] Setting volcano=true in profile "addons-757656"
	I1018 08:30:08.063019   10741 addons.go:238] Setting addon volcano=true in "addons-757656"
	I1018 08:30:08.063034   10741 addons.go:69] Setting ingress-dns=true in profile "addons-757656"
	I1018 08:30:08.063036   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.063042   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.063047   10741 addons.go:238] Setting addon ingress-dns=true in "addons-757656"
	I1018 08:30:08.063065   10741 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:30:08.063083   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.063308   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.063377   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.063471   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.063486   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.063516   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.063525   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.063567   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.063598   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.062807   10741 addons.go:238] Setting addon yakd=true in "addons-757656"
	I1018 08:30:08.063931   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.064191   10741 addons.go:69] Setting volumesnapshots=true in profile "addons-757656"
	I1018 08:30:08.064224   10741 addons.go:238] Setting addon volumesnapshots=true in "addons-757656"
	I1018 08:30:08.064251   10741 addons.go:69] Setting cloud-spanner=true in profile "addons-757656"
	I1018 08:30:08.064263   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.064271   10741 addons.go:238] Setting addon cloud-spanner=true in "addons-757656"
	I1018 08:30:08.064320   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.064454   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.064946   10741 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-757656"
	I1018 08:30:08.064972   10741 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-757656"
	I1018 08:30:08.064974   10741 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-757656"
	I1018 08:30:08.064990   10741 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-757656"
	I1018 08:30:08.065010   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.065338   10741 addons.go:69] Setting metrics-server=true in profile "addons-757656"
	I1018 08:30:08.065374   10741 addons.go:238] Setting addon metrics-server=true in "addons-757656"
	I1018 08:30:08.065385   10741 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-757656"
	I1018 08:30:08.065400   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.065448   10741 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-757656"
	I1018 08:30:08.065481   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.062786   10741 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:30:08.062891   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.066026   10741 out.go:179] * Verifying Kubernetes components...
	I1018 08:30:08.062802   10741 addons.go:69] Setting inspektor-gadget=true in profile "addons-757656"
	I1018 08:30:08.066305   10741 addons.go:238] Setting addon inspektor-gadget=true in "addons-757656"
	I1018 08:30:08.066366   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.067307   10741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:30:08.076030   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.076052   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.076060   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.076897   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.077252   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.076032   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.079961   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.080371   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.115504   10741 addons.go:238] Setting addon default-storageclass=true in "addons-757656"
	I1018 08:30:08.115556   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.116025   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.119950   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.132184   10741 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 08:30:08.133886   10741 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:30:08.133914   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 08:30:08.134015   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.143614   10741 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 08:30:08.145710   10741 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 08:30:08.143614   10741 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 08:30:08.149055   10741 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:30:08.149072   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 08:30:08.149134   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.149440   10741 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 08:30:08.149458   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 08:30:08.149514   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.153746   10741 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 08:30:08.158087   10741 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 08:30:08.158113   10741 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 08:30:08.158186   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.159026   10741 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 08:30:08.159040   10741 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 08:30:08.159110   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.168681   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 08:30:08.175939   10741 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	W1018 08:30:08.177320   10741 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 08:30:08.177876   10741 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 08:30:08.178060   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 08:30:08.178274   10741 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:30:08.178288   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 08:30:08.178359   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.179997   10741 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 08:30:08.180010   10741 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 08:30:08.180068   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.180788   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 08:30:08.181965   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 08:30:08.183357   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 08:30:08.185079   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 08:30:08.186420   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 08:30:08.190595   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 08:30:08.191653   10741 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 08:30:08.191677   10741 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 08:30:08.191747   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.200769   10741 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 08:30:08.201961   10741 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:30:08.201980   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 08:30:08.202034   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.204075   10741 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 08:30:08.205063   10741 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 08:30:08.205080   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 08:30:08.205143   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.208759   10741 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 08:30:08.208765   10741 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 08:30:08.209701   10741 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 08:30:08.209721   10741 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 08:30:08.209775   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.209963   10741 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:30:08.209974   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 08:30:08.210014   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.212104   10741 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:08.214481   10741 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:08.215487   10741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 08:30:08.216535   10741 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 08:30:08.220194   10741 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:30:08.220732   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 08:30:08.220945   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.222113   10741 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-757656"
	I1018 08:30:08.222633   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.223105   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.236661   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 08:30:08.238385   10741 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 08:30:08.238433   10741 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 08:30:08.238536   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.243897   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.245020   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.264066   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.265003   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.271060   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.282487   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.284387   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.285787   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.292444   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.295147   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.296149   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.301953   10741 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 08:30:08.303214   10741 out.go:179]   - Using image docker.io/busybox:stable
	I1018 08:30:08.304289   10741 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:30:08.304327   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 08:30:08.304409   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.306229   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	W1018 08:30:08.307500   10741 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 08:30:08.307543   10741 retry.go:31] will retry after 353.832763ms: ssh: handshake failed: EOF
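
A failed SSH handshake is retried after a randomized delay (353.832763ms here) rather than a fixed one, so parallel dials to the same node don't retry in lockstep. A sketch of such a jittered backoff; the exact formula is an assumption:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryAfter scales a base delay by a random factor in [0.5, 1.5),
	// spreading out simultaneous retries. Illustrative only.
	func retryAfter(base time.Duration) time.Duration {
		jitter := 0.5 + rand.Float64()
		return time.Duration(float64(base) * jitter)
	}

	func main() {
		for i := 0; i < 3; i++ {
			fmt.Println(retryAfter(300 * time.Millisecond))
		}
	}
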
	I1018 08:30:08.313957   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.316782   10741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:30:08.323432   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.337600   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.434002   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:30:08.434383   10741 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 08:30:08.434405   10741 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 08:30:08.436957   10741 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:08.436982   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 08:30:08.440330   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:30:08.456506   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:08.456664   10741 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:30:08.456680   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 08:30:08.460063   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:30:08.462786   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:30:08.487136   10741 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 08:30:08.487214   10741 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 08:30:08.487661   10741 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 08:30:08.487680   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 08:30:08.489040   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 08:30:08.490624   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 08:30:08.498029   10741 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 08:30:08.498054   10741 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 08:30:08.498080   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:30:08.503599   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:30:08.518671   10741 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 08:30:08.518696   10741 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 08:30:08.520662   10741 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 08:30:08.520681   10741 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 08:30:08.520933   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:30:08.529853   10741 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 08:30:08.529879   10741 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 08:30:08.540482   10741 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 08:30:08.540521   10741 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 08:30:08.563145   10741 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 08:30:08.563178   10741 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 08:30:08.566194   10741 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 08:30:08.566220   10741 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 08:30:08.571381   10741 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 08:30:08.571402   10741 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 08:30:08.604181   10741 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:30:08.604207   10741 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 08:30:08.613952   10741 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 08:30:08.613979   10741 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 08:30:08.622399   10741 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 08:30:08.622444   10741 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 08:30:08.630556   10741 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:30:08.630578   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 08:30:08.673202   10741 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:08.673292   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 08:30:08.684363   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:30:08.689130   10741 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1018 08:30:08.691582   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:30:08.691703   10741 node_ready.go:35] waiting up to 6m0s for node "addons-757656" to be "Ready" ...
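The "waiting up to 6m0s" line above is a poll of the node's Ready condition. A minimal sketch of the same check with client-go, assuming the kubeconfig path from the log and an illustrative 2s poll interval (this is not minikube's actual node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s with the same 6-minute budget the log reports.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "addons-757656", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready:", err == nil)
}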
	I1018 08:30:08.707221   10741 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 08:30:08.707254   10741 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 08:30:08.725222   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:08.778410   10741 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 08:30:08.778440   10741 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 08:30:08.810635   10741 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 08:30:08.810676   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 08:30:08.857616   10741 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 08:30:08.857658   10741 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 08:30:08.878004   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:30:08.913880   10741 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 08:30:08.913909   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 08:30:08.966854   10741 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 08:30:08.966884   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 08:30:09.025897   10741 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 08:30:09.025933   10741 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 08:30:09.060786   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 08:30:09.205398   10741 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-757656" context rescaled to 1 replicas
	W1018 08:30:09.380173   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:09.380206   10741 retry.go:31] will retry after 348.936188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
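kubectl's stderr pinpoints the recurring failure: ig-crd.yaml reached the node without apiVersion and kind, the two fields every Kubernetes manifest must set, so validation rejects it on each retry below. A hedged pre-flight check for exactly that condition, assuming a single-document manifest (not what minikube actually runs):

package main

import (
	"fmt"
	"os"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/yaml"
)

func main() {
	// Path taken from the log; single-document YAML assumed.
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	var obj unstructured.Unstructured
	if err := yaml.Unmarshal(data, &obj.Object); err != nil {
		panic(err)
	}
	if obj.GetAPIVersion() == "" || obj.GetKind() == "" {
		// The exact condition kubectl reports as "[apiVersion not set, kind not set]".
		fmt.Println("invalid manifest: apiVersion and/or kind missing")
		return
	}
	fmt.Printf("ok: %s %s\n", obj.GetAPIVersion(), obj.GetKind())
}

Note that --validate=false, which the error message suggests, would not actually help here: without apiVersion and kind, kubectl cannot map the object to an API resource at all.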
	I1018 08:30:09.380277   10741 addons.go:479] Verifying addon registry=true in "addons-757656"
	I1018 08:30:09.382639   10741 out.go:179] * Verifying registry addon...
	I1018 08:30:09.386413   10741 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 08:30:09.394136   10741 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 08:30:09.394164   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
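Each of the kapi.go:96 lines that follow is one tick of a poll loop: list the pods matching the label selector, report the phase, sleep, repeat. A rough client-go equivalent, where the function name, interval, and timeout are illustrative:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPods polls pods matching selector in ns until all report Running.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still Pending, as in the lines above
				}
			}
			return true, nil
		})
}

Called as, e.g., waitForPods(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry") for the selector in this log.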
	W1018 08:30:09.397613   10741 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
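The default-storageclass failure above is an optimistic-concurrency Conflict: something else updated the local-path StorageClass between the addon's read and its write, so the API server refused the stale update. client-go's standard remedy is retry.RetryOnConflict, which re-reads and re-applies; a sketch under that assumption (the helper name is ours, the annotation key is the standard one):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class annotation on a StorageClass,
// re-reading the object whenever the update hits a Conflict.
func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err // a Conflict here makes RetryOnConflict re-run the closure
	})
}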
	I1018 08:30:09.700863   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.17989307s)
	I1018 08:30:09.700916   10741 addons.go:479] Verifying addon ingress=true in "addons-757656"
	I1018 08:30:09.701039   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.016639263s)
	I1018 08:30:09.701064   10741 addons.go:479] Verifying addon metrics-server=true in "addons-757656"
	I1018 08:30:09.701143   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.009479928s)
	I1018 08:30:09.702843   10741 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-757656 service yakd-dashboard -n yakd-dashboard
	
	I1018 08:30:09.702859   10741 out.go:179] * Verifying ingress addon...
	I1018 08:30:09.705486   10741 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 08:30:09.708308   10741 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 08:30:09.708324   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:09.730019   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:09.889892   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:10.146472   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.268430636s)
	I1018 08:30:10.146480   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.421211137s)
	W1018 08:30:10.146535   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 08:30:10.146567   10741 retry.go:31] will retry after 270.293079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
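Unlike the ig-crd case, this failure is an ordering race rather than a bad manifest: the VolumeSnapshotClass is submitted in the same kubectl invocation that creates its CRD, and the new kind is not yet established when the class is validated, hence "ensure CRDs are installed first". One hedged way to serialize that with the apiextensions client (names and timeout are illustrative):

package sketch

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForCRD blocks until the named CRD reports Established=True, the
// condition the failed apply above raced against.
func waitForCRD(ctx context.Context, c apiextclient.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not created yet; keep polling
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

Calling waitForCRD(ctx, c, "volumesnapshotclasses.snapshot.storage.k8s.io") before applying csi-hostpath-snapshotclass.yaml would close the race; minikube instead just retries, which converges once the CRD is established, as the completed apply further down shows.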
	I1018 08:30:10.146697   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.085874503s)
	I1018 08:30:10.146723   10741 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-757656"
	I1018 08:30:10.148130   10741 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 08:30:10.150570   10741 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 08:30:10.152985   10741 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 08:30:10.153009   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:10.208289   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:10.389378   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:30:10.404827   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:10.404859   10741 retry.go:31] will retry after 420.124511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:10.417954   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:10.654608   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:10.695017   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:10.708615   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:10.826041   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:10.889464   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:11.153614   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:11.254890   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:11.390187   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:11.653778   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:11.708970   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:11.889774   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:12.154063   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:12.208094   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:12.389412   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:12.653929   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:12.754476   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:12.878483   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.460476362s)
	I1018 08:30:12.878541   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.052463839s)
	W1018 08:30:12.878580   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:12.878600   10741 retry.go:31] will retry after 406.363652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:12.889244   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:13.153812   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:13.195269   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:13.254603   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:13.285588   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:13.389244   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:13.654367   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:13.708492   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:30:13.823365   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:13.823395   10741 retry.go:31] will retry after 811.525025ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:13.890324   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:14.154583   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:14.208500   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:14.389892   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:14.635522   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:14.654065   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:14.709215   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:14.889335   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:15.154083   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:15.170761   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:15.170788   10741 retry.go:31] will retry after 1.254858149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:15.208842   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:15.389887   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:15.654194   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:15.694592   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:15.709007   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:15.726209   10741 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 08:30:15.726284   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:15.744076   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:15.845872   10741 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 08:30:15.859016   10741 addons.go:238] Setting addon gcp-auth=true in "addons-757656"
	I1018 08:30:15.859077   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:15.859466   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:15.877988   10741 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 08:30:15.878043   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:15.890079   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:15.896291   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
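The Port:32768 in the new ssh client above comes from the docker inspect template run just before it: the driver asks Docker which host port is bound to the container's 22/tcp. The same lookup as a standalone sketch, with the container name taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the log shows: index the port bindings for 22/tcp
	// and take the first mapping's HostPort.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"addons-757656").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 32768
}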
	I1018 08:30:15.990374   10741 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:15.991673   10741 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 08:30:15.992751   10741 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 08:30:15.992783   10741 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 08:30:16.006748   10741 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 08:30:16.006768   10741 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 08:30:16.019865   10741 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:30:16.019887   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 08:30:16.032783   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:30:16.153964   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:16.209162   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:16.344643   10741 addons.go:479] Verifying addon gcp-auth=true in "addons-757656"
	I1018 08:30:16.346395   10741 out.go:179] * Verifying gcp-auth addon...
	I1018 08:30:16.350411   10741 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 08:30:16.352629   10741 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 08:30:16.352649   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:16.389027   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:16.426329   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:16.653410   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:16.708527   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:16.854205   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:16.888732   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:30:16.959586   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:16.959615   10741 retry.go:31] will retry after 1.129108218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:17.153489   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:17.208470   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:17.354556   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:17.389737   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:17.653794   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:17.695042   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:17.709514   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:17.853034   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:17.889591   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:18.089864   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:18.154109   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:18.208572   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:18.353092   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:18.388887   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:30:18.620958   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:18.620982   10741 retry.go:31] will retry after 3.766935063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:18.653444   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:18.708698   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:18.853427   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:18.889931   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:19.153966   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:19.209019   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:19.353565   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:19.388984   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:19.654066   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:19.708565   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:19.854002   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:19.889243   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:20.154070   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:20.194673   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:20.208131   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:20.353497   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:20.390103   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:20.653898   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:20.709097   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:20.853592   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:20.889252   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:21.153853   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:21.208791   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:21.353440   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:21.390019   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:21.654327   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:21.708520   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:21.853490   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:21.890043   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:22.153781   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:22.195130   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:22.208802   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:22.353286   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:22.388427   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:22.389781   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:22.654272   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:22.708967   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:22.853630   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:22.889458   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:30:22.929469   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:22.929503   10741 retry.go:31] will retry after 2.49806791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:23.153158   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:23.209038   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:23.353683   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:23.389134   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:23.654551   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:23.708525   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:23.853914   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:23.889160   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:24.154086   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:24.208838   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:24.353319   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:24.389759   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:24.653284   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:24.694621   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:24.708054   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:24.853666   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:24.889183   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:25.153913   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:25.208650   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:25.353731   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:25.389331   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:25.428600   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:25.654081   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:25.709420   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:25.853246   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:25.890032   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:30:25.966944   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:25.966982   10741 retry.go:31] will retry after 7.110811732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
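Across the ig-crd retries the delay grows from 348ms to 7.11s, i.e. roughly exponential backoff with jitter. A toy sketch of that schedule, with constants chosen to resemble this run rather than minikube's retry.go:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoff returns an exponentially growing delay with up to 100% jitter,
// the shape of the "will retry after ..." intervals in this run.
func backoff(attempt int) time.Duration {
	base := time.Duration(200<<attempt) * time.Millisecond // 200ms, 400ms, 800ms, ...
	jitter := time.Duration(rand.Int63n(int64(base)))
	return base + jitter
}

func main() {
	for i := 0; i < 6; i++ {
		fmt.Printf("attempt %d: wait %v\n", i, backoff(i))
	}
}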
	I1018 08:30:26.153730   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:26.208475   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:26.352933   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:26.389458   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:26.653324   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:26.694681   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:26.708438   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:26.853902   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:26.889141   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:27.153980   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:27.209063   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:27.353657   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:27.388938   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:27.654120   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:27.709126   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:27.853600   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:27.891655   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:28.153720   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:28.208592   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:28.352958   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:28.389488   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:28.653872   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:28.695270   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:28.709275   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:28.853982   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:28.889582   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:29.153547   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:29.208547   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:29.354199   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:29.389697   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:29.653527   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:29.708092   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:29.853625   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:29.889008   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:30.153651   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:30.208468   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:30.354025   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:30.389582   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:30.653249   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:30.708400   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:30.853791   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:30.889104   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:31.153888   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:31.194244   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:31.208759   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:31.353632   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:31.389113   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:31.654167   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:31.708515   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:31.854429   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:31.889768   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:32.153460   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:32.208378   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:32.353612   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:32.388995   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:32.653623   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:32.708717   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:32.853013   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:32.889578   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:33.078886   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:33.154130   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:33.194624   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:33.208172   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:33.353802   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:33.389256   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:30:33.619120   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:33.619146   10741 retry.go:31] will retry after 7.500285491s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:33.654093   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:33.708064   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:33.853440   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:33.889828   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:34.153618   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:34.208376   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:34.354091   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:34.389984   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:34.653691   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:34.708416   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:34.853998   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:34.889542   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:35.153216   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:35.194789   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:35.208512   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:35.354051   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:35.389538   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:35.653407   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:35.708540   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:35.852890   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:35.889336   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:36.154010   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:36.208706   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:36.353261   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:36.389809   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:36.653656   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:36.708849   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:36.853334   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:36.889830   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:37.153701   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:37.199056   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:37.208380   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:37.354115   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:37.389677   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:37.653236   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:37.708281   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:37.853835   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:37.889267   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:38.153158   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:38.208464   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:38.352953   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:38.389535   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:38.653476   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:38.708857   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:38.853416   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:38.890016   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:39.154005   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:39.208368   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:39.353824   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:39.389252   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:39.654425   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:39.694944   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:39.708663   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:39.853119   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:39.889813   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:40.153834   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:40.209306   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:40.353789   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:40.389304   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:40.654195   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:40.708517   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:40.853242   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:40.889643   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:41.119991   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:41.154081   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:41.208451   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:41.353222   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:41.389944   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:41.654083   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:41.669670   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:41.669706   10741 retry.go:31] will retry after 20.877241923s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
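Every attempt fails the same way, so the growing backoff (7.1s, 7.5s, now 20.9s) only spaces out a deterministic failure; the manifest itself has to be fixed. As a minimal sketch of a CRD document that passes this validation (the group and names below are placeholders, not Inspektor Gadget's real ones), both fields the error complains about must be present:

	kubectl apply --dry-run=client -f - <<'EOF'
	apiVersion: apiextensions.k8s.io/v1     # the two fields the error says are "not set"
	kind: CustomResourceDefinition
	metadata:
	  name: examples.gadget.example.com     # placeholder; must be <plural>.<group>
	spec:
	  group: gadget.example.com             # placeholder group for illustration
	  scope: Namespaced
	  names:
	    plural: examples
	    singular: example
	    kind: Example
	  versions:
	    - name: v1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object
	EOF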
	W1018 08:30:41.695168   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:41.708895   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:41.854134   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:41.889997   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:42.154046   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:42.208208   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:42.354002   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:42.389517   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:42.653333   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:42.709117   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:42.853701   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:42.889265   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:43.154254   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:43.208469   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:43.353933   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:43.389407   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:43.654149   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:43.708515   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:43.853981   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:43.889447   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:44.153595   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:44.194936   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:44.208478   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:44.353922   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:44.389530   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:44.653043   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:44.709373   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:44.853899   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:44.889500   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:45.154112   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:45.209110   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:45.353879   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:45.389304   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:45.654083   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:45.709210   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:45.853559   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:45.888864   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:46.153594   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:46.195038   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:46.208820   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:46.354014   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:46.389476   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:46.653223   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:46.708267   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:46.853862   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:46.889374   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:47.154148   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:47.208557   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:47.353285   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:47.389645   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:47.653377   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:47.708611   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:47.853087   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:47.889758   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:48.153779   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:48.195134   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:48.208949   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:48.353397   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:48.389836   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:48.653642   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:48.709168   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:48.853610   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:48.889101   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:49.154061   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:49.208574   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:49.354035   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:49.389420   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:49.654088   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:49.708970   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:49.852983   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:49.889183   10741 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 08:30:49.889213   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:50.156221   10741 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 08:30:50.156250   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
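Judging by the timestamps, kapi.go re-checks each label selector roughly every 500ms until all matched pods leave Pending; "Found 2 Pods" / "Found 3 Pods" means the selectors now resolve to pods that are scheduled but not yet Running. The same view by hand, with the selectors copied from the log:

	kubectl get pods -A -l kubernetes.io/minikube-addons=registry
	kubectl get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver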
	I1018 08:30:50.194415   10741 node_ready.go:49] node "addons-757656" is "Ready"
	I1018 08:30:50.194445   10741 node_ready.go:38] duration metric: took 41.502617212s for node "addons-757656" to be "Ready" ...
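The 41.5s wait covers the node bootstrapping until its Ready condition turned True. Outside the harness, a one-shot equivalent (node name taken from the log) would be:

	kubectl wait --for=condition=Ready node/addons-757656 --timeout=120s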
	I1018 08:30:50.194462   10741 api_server.go:52] waiting for apiserver process to appear ...
	I1018 08:30:50.194518   10741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:30:50.214645   10741 api_server.go:72] duration metric: took 42.152040104s to wait for apiserver process to appear ...
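The process probe relies on pgrep's exit status: -f matches against the full command line, -x requires the pattern to match it exactly, and -n keeps only the newest matching PID, so exit 0 means an apiserver process exists:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'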
	I1018 08:30:50.214671   10741 api_server.go:88] waiting for apiserver healthz status ...
	I1018 08:30:50.214693   10741 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 08:30:50.223540   10741 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
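The healthz probe is a plain HTTPS GET; kubeadm-style clusters normally let anonymous clients hit /healthz, /livez and /readyz through the system:public-info-viewer binding, so the same check works from a shell (endpoint from the log; -k because the cluster CA is not in the system trust store):

	curl -sk https://192.168.49.2:8443/healthz
	curl -sk 'https://192.168.49.2:8443/readyz?verbose'   # per-check breakdown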
	I1018 08:30:50.224964   10741 api_server.go:141] control plane version: v1.34.1
	I1018 08:30:50.225048   10741 api_server.go:131] duration metric: took 10.368429ms to wait for apiserver health ...
	I1018 08:30:50.225074   10741 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 08:30:50.256938   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:50.258143   10741 system_pods.go:59] 20 kube-system pods found
	I1018 08:30:50.258186   10741 system_pods.go:61] "amd-gpu-device-plugin-v82lt" [24a1cd58-553e-4bee-beaf-75f0e39eeb29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:30:50.258207   10741 system_pods.go:61] "coredns-66bc5c9577-jc8rc" [71b4dffb-1fac-41b5-a0d1-00bd35170863] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:30:50.258219   10741 system_pods.go:61] "csi-hostpath-attacher-0" [cd1b553c-f507-4a00-be3f-112646d6bac9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:30:50.258229   10741 system_pods.go:61] "csi-hostpath-resizer-0" [33e2db37-d8fa-46ef-9e57-c95189e22be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:30:50.258238   10741 system_pods.go:61] "csi-hostpathplugin-cdc5c" [2f6ff4c2-5cba-44ce-8782-9c83b09037d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:30:50.258258   10741 system_pods.go:61] "etcd-addons-757656" [a652fa9c-0b57-403a-988b-2523ea85d6a1] Running
	I1018 08:30:50.258263   10741 system_pods.go:61] "kindnet-tdxms" [cbe08d6f-c4c2-4fea-a63d-61727b12b409] Running
	I1018 08:30:50.258268   10741 system_pods.go:61] "kube-apiserver-addons-757656" [97d192e7-983b-42e7-819c-f8f37eb0e4c1] Running
	I1018 08:30:50.258273   10741 system_pods.go:61] "kube-controller-manager-addons-757656" [7095119c-78e7-4f0d-9d7d-7163b936265f] Running
	I1018 08:30:50.258281   10741 system_pods.go:61] "kube-ingress-dns-minikube" [03e9ac2e-0063-4caf-a121-5001352c7116] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:30:50.258286   10741 system_pods.go:61] "kube-proxy-gw6hz" [ab15e9e6-a4b9-435f-b1e3-edb00fd5bac3] Running
	I1018 08:30:50.258293   10741 system_pods.go:61] "kube-scheduler-addons-757656" [d0cd2940-7f92-4160-8c18-6e43e28c485a] Running
	I1018 08:30:50.258301   10741 system_pods.go:61] "metrics-server-85b7d694d7-vl9c2" [662d002a-23b4-4f7f-a0bc-3a0813819aa2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:30:50.258310   10741 system_pods.go:61] "nvidia-device-plugin-daemonset-bnzlc" [6bc7ead8-1876-4ced-8f8f-ca3c9e987a0e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:30:50.258318   10741 system_pods.go:61] "registry-6b586f9694-lbbgc" [f8fb4269-2c69-4e40-ac5a-1a96111c3b97] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:30:50.258334   10741 system_pods.go:61] "registry-creds-764b6fb674-h7xh9" [ef91ed70-4df9-4356-8c81-61877956a49a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:30:50.258366   10741 system_pods.go:61] "registry-proxy-7g848" [66b8f065-f87f-49bb-a6ff-d7474f5b093b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:30:50.258375   10741 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7zt7h" [dce8dc99-b0dc-4597-a45a-706a104f0579] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.258384   10741 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nbz64" [3c9a5a42-a8e5-40b0-bea3-3e38e6f5ec23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.258391   10741 system_pods.go:61] "storage-provisioner" [814600d1-fc8c-417d-8d70-98d698fd7d63] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:30:50.258400   10741 system_pods.go:74] duration metric: took 33.308827ms to wait for pod list to return data ...
	I1018 08:30:50.258412   10741 default_sa.go:34] waiting for default service account to be created ...
	I1018 08:30:50.261058   10741 default_sa.go:45] found service account: "default"
	I1018 08:30:50.261078   10741 default_sa.go:55] duration metric: took 2.659425ms for default service account to be created ...
	I1018 08:30:50.261088   10741 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 08:30:50.264095   10741 system_pods.go:86] 20 kube-system pods found
	I1018 08:30:50.264123   10741 system_pods.go:89] "amd-gpu-device-plugin-v82lt" [24a1cd58-553e-4bee-beaf-75f0e39eeb29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:30:50.264130   10741 system_pods.go:89] "coredns-66bc5c9577-jc8rc" [71b4dffb-1fac-41b5-a0d1-00bd35170863] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:30:50.264137   10741 system_pods.go:89] "csi-hostpath-attacher-0" [cd1b553c-f507-4a00-be3f-112646d6bac9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:30:50.264144   10741 system_pods.go:89] "csi-hostpath-resizer-0" [33e2db37-d8fa-46ef-9e57-c95189e22be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:30:50.264150   10741 system_pods.go:89] "csi-hostpathplugin-cdc5c" [2f6ff4c2-5cba-44ce-8782-9c83b09037d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:30:50.264159   10741 system_pods.go:89] "etcd-addons-757656" [a652fa9c-0b57-403a-988b-2523ea85d6a1] Running
	I1018 08:30:50.264163   10741 system_pods.go:89] "kindnet-tdxms" [cbe08d6f-c4c2-4fea-a63d-61727b12b409] Running
	I1018 08:30:50.264166   10741 system_pods.go:89] "kube-apiserver-addons-757656" [97d192e7-983b-42e7-819c-f8f37eb0e4c1] Running
	I1018 08:30:50.264170   10741 system_pods.go:89] "kube-controller-manager-addons-757656" [7095119c-78e7-4f0d-9d7d-7163b936265f] Running
	I1018 08:30:50.264175   10741 system_pods.go:89] "kube-ingress-dns-minikube" [03e9ac2e-0063-4caf-a121-5001352c7116] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:30:50.264183   10741 system_pods.go:89] "kube-proxy-gw6hz" [ab15e9e6-a4b9-435f-b1e3-edb00fd5bac3] Running
	I1018 08:30:50.264188   10741 system_pods.go:89] "kube-scheduler-addons-757656" [d0cd2940-7f92-4160-8c18-6e43e28c485a] Running
	I1018 08:30:50.264194   10741 system_pods.go:89] "metrics-server-85b7d694d7-vl9c2" [662d002a-23b4-4f7f-a0bc-3a0813819aa2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:30:50.264203   10741 system_pods.go:89] "nvidia-device-plugin-daemonset-bnzlc" [6bc7ead8-1876-4ced-8f8f-ca3c9e987a0e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:30:50.264209   10741 system_pods.go:89] "registry-6b586f9694-lbbgc" [f8fb4269-2c69-4e40-ac5a-1a96111c3b97] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:30:50.264216   10741 system_pods.go:89] "registry-creds-764b6fb674-h7xh9" [ef91ed70-4df9-4356-8c81-61877956a49a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:30:50.264223   10741 system_pods.go:89] "registry-proxy-7g848" [66b8f065-f87f-49bb-a6ff-d7474f5b093b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:30:50.264228   10741 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7zt7h" [dce8dc99-b0dc-4597-a45a-706a104f0579] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.264233   10741 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbz64" [3c9a5a42-a8e5-40b0-bea3-3e38e6f5ec23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.264239   10741 system_pods.go:89] "storage-provisioner" [814600d1-fc8c-417d-8d70-98d698fd7d63] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:30:50.264252   10741 retry.go:31] will retry after 242.093194ms: missing components: kube-dns
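"missing components: kube-dns" means coredns-66bc5c9577-jc8rc was still Pending on this pass; minikube tracks CoreDNS under the legacy kube-dns component name. A manual equivalent of the gate, assuming the deployment carries the conventional CoreDNS label:

	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l k8s-app=kube-dns --timeout=120s   # k8s-app=kube-dns label assumed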
	I1018 08:30:50.353662   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:50.389579   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:50.511058   10741 system_pods.go:86] 20 kube-system pods found
	I1018 08:30:50.511096   10741 system_pods.go:89] "amd-gpu-device-plugin-v82lt" [24a1cd58-553e-4bee-beaf-75f0e39eeb29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:30:50.511108   10741 system_pods.go:89] "coredns-66bc5c9577-jc8rc" [71b4dffb-1fac-41b5-a0d1-00bd35170863] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:30:50.511119   10741 system_pods.go:89] "csi-hostpath-attacher-0" [cd1b553c-f507-4a00-be3f-112646d6bac9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:30:50.511130   10741 system_pods.go:89] "csi-hostpath-resizer-0" [33e2db37-d8fa-46ef-9e57-c95189e22be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:30:50.511158   10741 system_pods.go:89] "csi-hostpathplugin-cdc5c" [2f6ff4c2-5cba-44ce-8782-9c83b09037d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:30:50.511170   10741 system_pods.go:89] "etcd-addons-757656" [a652fa9c-0b57-403a-988b-2523ea85d6a1] Running
	I1018 08:30:50.511182   10741 system_pods.go:89] "kindnet-tdxms" [cbe08d6f-c4c2-4fea-a63d-61727b12b409] Running
	I1018 08:30:50.511191   10741 system_pods.go:89] "kube-apiserver-addons-757656" [97d192e7-983b-42e7-819c-f8f37eb0e4c1] Running
	I1018 08:30:50.511197   10741 system_pods.go:89] "kube-controller-manager-addons-757656" [7095119c-78e7-4f0d-9d7d-7163b936265f] Running
	I1018 08:30:50.511210   10741 system_pods.go:89] "kube-ingress-dns-minikube" [03e9ac2e-0063-4caf-a121-5001352c7116] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:30:50.511217   10741 system_pods.go:89] "kube-proxy-gw6hz" [ab15e9e6-a4b9-435f-b1e3-edb00fd5bac3] Running
	I1018 08:30:50.511223   10741 system_pods.go:89] "kube-scheduler-addons-757656" [d0cd2940-7f92-4160-8c18-6e43e28c485a] Running
	I1018 08:30:50.511234   10741 system_pods.go:89] "metrics-server-85b7d694d7-vl9c2" [662d002a-23b4-4f7f-a0bc-3a0813819aa2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:30:50.511245   10741 system_pods.go:89] "nvidia-device-plugin-daemonset-bnzlc" [6bc7ead8-1876-4ced-8f8f-ca3c9e987a0e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:30:50.511258   10741 system_pods.go:89] "registry-6b586f9694-lbbgc" [f8fb4269-2c69-4e40-ac5a-1a96111c3b97] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:30:50.511269   10741 system_pods.go:89] "registry-creds-764b6fb674-h7xh9" [ef91ed70-4df9-4356-8c81-61877956a49a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:30:50.511280   10741 system_pods.go:89] "registry-proxy-7g848" [66b8f065-f87f-49bb-a6ff-d7474f5b093b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:30:50.511291   10741 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7zt7h" [dce8dc99-b0dc-4597-a45a-706a104f0579] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.511310   10741 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbz64" [3c9a5a42-a8e5-40b0-bea3-3e38e6f5ec23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.511318   10741 system_pods.go:89] "storage-provisioner" [814600d1-fc8c-417d-8d70-98d698fd7d63] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:30:50.511357   10741 retry.go:31] will retry after 350.295896ms: missing components: kube-dns
	I1018 08:30:50.654938   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:50.708994   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:50.853922   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:50.866446   10741 system_pods.go:86] 20 kube-system pods found
	I1018 08:30:50.866482   10741 system_pods.go:89] "amd-gpu-device-plugin-v82lt" [24a1cd58-553e-4bee-beaf-75f0e39eeb29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:30:50.866491   10741 system_pods.go:89] "coredns-66bc5c9577-jc8rc" [71b4dffb-1fac-41b5-a0d1-00bd35170863] Running
	I1018 08:30:50.866504   10741 system_pods.go:89] "csi-hostpath-attacher-0" [cd1b553c-f507-4a00-be3f-112646d6bac9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:30:50.866514   10741 system_pods.go:89] "csi-hostpath-resizer-0" [33e2db37-d8fa-46ef-9e57-c95189e22be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:30:50.866525   10741 system_pods.go:89] "csi-hostpathplugin-cdc5c" [2f6ff4c2-5cba-44ce-8782-9c83b09037d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:30:50.866535   10741 system_pods.go:89] "etcd-addons-757656" [a652fa9c-0b57-403a-988b-2523ea85d6a1] Running
	I1018 08:30:50.866542   10741 system_pods.go:89] "kindnet-tdxms" [cbe08d6f-c4c2-4fea-a63d-61727b12b409] Running
	I1018 08:30:50.866551   10741 system_pods.go:89] "kube-apiserver-addons-757656" [97d192e7-983b-42e7-819c-f8f37eb0e4c1] Running
	I1018 08:30:50.866557   10741 system_pods.go:89] "kube-controller-manager-addons-757656" [7095119c-78e7-4f0d-9d7d-7163b936265f] Running
	I1018 08:30:50.866567   10741 system_pods.go:89] "kube-ingress-dns-minikube" [03e9ac2e-0063-4caf-a121-5001352c7116] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:30:50.866582   10741 system_pods.go:89] "kube-proxy-gw6hz" [ab15e9e6-a4b9-435f-b1e3-edb00fd5bac3] Running
	I1018 08:30:50.866591   10741 system_pods.go:89] "kube-scheduler-addons-757656" [d0cd2940-7f92-4160-8c18-6e43e28c485a] Running
	I1018 08:30:50.866599   10741 system_pods.go:89] "metrics-server-85b7d694d7-vl9c2" [662d002a-23b4-4f7f-a0bc-3a0813819aa2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:30:50.866612   10741 system_pods.go:89] "nvidia-device-plugin-daemonset-bnzlc" [6bc7ead8-1876-4ced-8f8f-ca3c9e987a0e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:30:50.866623   10741 system_pods.go:89] "registry-6b586f9694-lbbgc" [f8fb4269-2c69-4e40-ac5a-1a96111c3b97] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:30:50.866631   10741 system_pods.go:89] "registry-creds-764b6fb674-h7xh9" [ef91ed70-4df9-4356-8c81-61877956a49a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:30:50.866641   10741 system_pods.go:89] "registry-proxy-7g848" [66b8f065-f87f-49bb-a6ff-d7474f5b093b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:30:50.866650   10741 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7zt7h" [dce8dc99-b0dc-4597-a45a-706a104f0579] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.866662   10741 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbz64" [3c9a5a42-a8e5-40b0-bea3-3e38e6f5ec23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.866667   10741 system_pods.go:89] "storage-provisioner" [814600d1-fc8c-417d-8d70-98d698fd7d63] Running
	I1018 08:30:50.866679   10741 system_pods.go:126] duration metric: took 605.584736ms to wait for k8s-apps to be running ...
	I1018 08:30:50.866692   10741 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 08:30:50.866750   10741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:30:50.884428   10741 system_svc.go:56] duration metric: took 17.72595ms WaitForService to wait for kubelet
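The kubelet gate is just systemd's exit status: is-active returns 0 only while the unit is active, and --quiet drops the textual state so only the status code matters:

	sudo systemctl is-active --quiet kubelet && echo kubelet is active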
	I1018 08:30:50.884468   10741 kubeadm.go:586] duration metric: took 42.821868966s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:30:50.884491   10741 node_conditions.go:102] verifying NodePressure condition ...
	I1018 08:30:50.887813   10741 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 08:30:50.887845   10741 node_conditions.go:123] node cpu capacity is 8
	I1018 08:30:50.887864   10741 node_conditions.go:105] duration metric: took 3.367416ms to run NodePressure ...
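The NodePressure pass reads capacity and conditions off the node object; the figures above (304681132Ki ephemeral storage, 8 CPUs) can be pulled directly:

	kubectl get node addons-757656 -o jsonpath='{.status.capacity}{"\n"}'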
	I1018 08:30:50.887879   10741 start.go:241] waiting for startup goroutines ...
	I1018 08:30:50.889086   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:51.154207   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:51.209144   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:51.353998   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:51.389564   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:51.654059   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:51.708736   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:51.854310   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:51.890183   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:52.154622   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:52.209294   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:52.354716   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:52.389605   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:52.653981   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:52.754619   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:52.853433   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:52.890610   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:53.153818   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:53.208692   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:53.353486   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:53.390208   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:53.654419   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:53.709150   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:53.853766   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:53.889548   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:54.153621   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:54.208698   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:54.353515   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:54.390475   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:54.654778   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:54.708770   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:54.853592   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:54.889062   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:55.153835   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:55.208296   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:55.354005   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:55.389465   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:55.655240   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:55.709293   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:55.854754   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:55.955138   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:56.154120   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:56.208836   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:56.353890   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:56.389605   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:56.656261   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:56.711096   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:56.854454   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:56.891124   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:57.154628   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:57.209555   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:57.354501   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:57.389413   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:57.655262   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:57.709389   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:57.854326   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:57.890281   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:58.154938   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:58.208651   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:58.353861   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:58.389989   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:58.655104   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:58.708955   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:58.853840   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:58.889957   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:59.154485   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:59.209056   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:59.353931   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:59.389879   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:59.655463   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:59.709267   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:59.853904   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:59.889698   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:00.154402   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:00.209457   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:00.354107   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:00.390009   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:00.654242   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:00.709045   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:00.854002   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:00.910891   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:01.154358   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:01.209321   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:01.355600   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:01.390926   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:01.654050   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:01.709136   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:01.854680   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:01.889644   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:02.154162   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:02.254872   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:02.354597   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:02.389531   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:02.547579   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:02.654161   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:02.709471   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:02.854377   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:02.890011   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:03.154625   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:03.209619   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:31:03.267712   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:03.267750   10741 retry.go:31] will retry after 23.096835382s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
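	The stderr above pins down the root cause of the inspektor-gadget failures in this run: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest is missing the mandatory apiVersion and kind fields. The file itself is not reproduced in this report; as a sketch only, a CRD manifest that passes this check has to open with a header along the following lines (apiextensions.k8s.io/v1 and CustomResourceDefinition are the standard CRD header values, and the name is a hypothetical placeholder, not the actual file contents):
	
	  # apiVersion and kind are exactly the two fields the validator reports as "not set".
	  apiVersion: apiextensions.k8s.io/v1    # API group/version of the object
	  kind: CustomResourceDefinition         # object type this manifest declares
	  metadata:
	    name: example.crd.local              # hypothetical CRD name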
	I1018 08:31:03.354015   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:03.390686   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:03.653574   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:03.709960   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:03.853701   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:03.889697   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:04.154484   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:04.209326   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:04.354655   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:04.389757   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:04.656727   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:04.765300   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:04.867594   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:04.965942   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:05.153727   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:05.208190   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:05.353620   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:05.389276   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:05.660733   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:05.711073   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:05.854250   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:05.890289   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:06.155022   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:06.208913   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:06.354220   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:06.390138   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:06.654333   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:06.709422   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:06.854133   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:06.889728   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:07.153663   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:07.208195   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:07.354145   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:07.389864   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:07.654919   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:07.709778   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:07.853655   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:07.889148   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:08.155140   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:08.208873   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:08.354458   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:08.390320   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:08.654681   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:08.709910   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:08.853248   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:08.889927   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:09.154009   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:09.209037   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:09.353797   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:09.389660   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:09.654114   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:09.708502   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:09.853936   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:09.889613   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:10.153788   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:10.208098   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:10.354032   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:10.389750   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:10.654192   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:10.709408   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:10.853784   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:10.889219   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:11.155387   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:11.210155   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:11.355188   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:11.391288   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:11.656180   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:11.710892   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:11.854256   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:11.890121   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:12.155187   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:12.209387   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:12.355185   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:12.389848   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:12.654251   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:12.709273   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:12.854011   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:12.890138   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:13.155234   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:13.208841   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:13.353797   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:13.389643   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:13.654293   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:13.709407   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:13.854147   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:13.890156   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:14.154645   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:14.209667   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:14.353414   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:14.390724   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:14.654850   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:14.708766   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:14.853590   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:14.889707   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:15.153919   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:15.208859   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:15.353892   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:15.389952   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:15.654439   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:15.709192   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:15.853717   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:15.889829   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:16.153391   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:16.209242   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:16.354120   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:16.391096   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:16.654591   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:16.754599   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:16.853723   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:16.889770   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:17.154066   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:17.208890   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:17.353793   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:17.389705   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:17.653872   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:17.754425   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:17.854290   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:17.889983   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:18.154510   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:18.208992   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:18.353629   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:18.389244   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:18.654710   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:18.713227   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:18.853872   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:18.889768   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:19.153958   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:19.208704   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:19.353223   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:19.389708   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:19.654523   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:19.708836   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:19.853635   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:19.889120   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:20.154821   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:20.208678   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:20.359605   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:20.389062   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:20.654832   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:20.708481   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:20.854403   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:20.955739   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:21.153885   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:21.209783   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:21.354495   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:21.391834   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:21.655769   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:21.716642   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:21.855646   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:21.890486   10741 kapi.go:107] duration metric: took 1m12.50407048s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 08:31:22.153933   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:22.208827   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:22.353510   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:22.653999   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:22.709127   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:22.853856   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:23.154655   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:23.209370   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:23.353996   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:23.654643   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:23.755790   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:23.855942   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:24.154257   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:24.209068   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:24.353673   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:24.726824   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:24.726852   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:24.853674   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:25.154436   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:25.209464   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:25.354053   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:25.654527   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:25.741221   10741 kapi.go:107] duration metric: took 1m16.035730152s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 08:31:25.854010   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:26.154145   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:26.354440   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:26.365534   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:26.655489   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:26.853657   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:27.154183   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:31:27.169202   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:27.169233   10741 retry.go:31] will retry after 19.645870036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:27.354211   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:27.654389   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:27.853861   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:28.154312   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:28.354319   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:28.654515   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:28.853811   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:29.154121   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:29.354064   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:29.654585   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:29.853826   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:30.154886   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:30.354096   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:30.654196   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:30.854583   10741 kapi.go:107] duration metric: took 1m14.504170603s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 08:31:30.856876   10741 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-757656 cluster.
	I1018 08:31:30.858375   10741 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 08:31:30.859619   10741 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
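	As the gcp-auth messages above explain, every new pod in the cluster gets the credentials mounted unless it carries the `gcp-auth-skip-secret` label. A minimal opt-out sketch follows; only the label key comes from the log, while the pod name, the "true" value, and the image are illustrative assumptions:
	
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds               # hypothetical pod name
	    labels:
	      gcp-auth-skip-secret: "true"   # assumed value; the log only names the key
	  spec:
	    containers:
	    - name: app
	      image: busybox                 # illustrative image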
	I1018 08:31:31.153910   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:31.654469   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:32.154397   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:32.654362   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:33.154007   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:33.655335   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:34.153594   10741 kapi.go:107] duration metric: took 1m24.003027194s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 08:31:46.817252   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 08:31:47.354361   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 08:31:47.354480   10741 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
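	The stderr itself names the escape hatch for this failure: rerunning the same apply with client-side validation disabled. A sketch of that command, taken verbatim from the log with only the --validate=false flag (suggested by the error text) added; it would let the apply succeed while masking, rather than fixing, the malformed manifest:
	
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml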
	I1018 08:31:47.356291   10741 out.go:179] * Enabled addons: registry-creds, storage-provisioner, nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, storage-provisioner-rancher, metrics-server, yakd, ingress-dns, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 08:31:47.358050   10741 addons.go:514] duration metric: took 1m39.295411056s for enable addons: enabled=[registry-creds storage-provisioner nvidia-device-plugin amd-gpu-device-plugin cloud-spanner storage-provisioner-rancher metrics-server yakd ingress-dns volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1018 08:31:47.358091   10741 start.go:246] waiting for cluster config update ...
	I1018 08:31:47.358108   10741 start.go:255] writing updated cluster config ...
	I1018 08:31:47.358390   10741 ssh_runner.go:195] Run: rm -f paused
	I1018 08:31:47.362481   10741 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 08:31:47.366191   10741 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jc8rc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.370452   10741 pod_ready.go:94] pod "coredns-66bc5c9577-jc8rc" is "Ready"
	I1018 08:31:47.370480   10741 pod_ready.go:86] duration metric: took 4.266473ms for pod "coredns-66bc5c9577-jc8rc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.372337   10741 pod_ready.go:83] waiting for pod "etcd-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.376577   10741 pod_ready.go:94] pod "etcd-addons-757656" is "Ready"
	I1018 08:31:47.376600   10741 pod_ready.go:86] duration metric: took 4.220256ms for pod "etcd-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.378896   10741 pod_ready.go:83] waiting for pod "kube-apiserver-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.384306   10741 pod_ready.go:94] pod "kube-apiserver-addons-757656" is "Ready"
	I1018 08:31:47.384337   10741 pod_ready.go:86] duration metric: took 5.420435ms for pod "kube-apiserver-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.386423   10741 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.766295   10741 pod_ready.go:94] pod "kube-controller-manager-addons-757656" is "Ready"
	I1018 08:31:47.766326   10741 pod_ready.go:86] duration metric: took 379.881197ms for pod "kube-controller-manager-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.966387   10741 pod_ready.go:83] waiting for pod "kube-proxy-gw6hz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:48.366197   10741 pod_ready.go:94] pod "kube-proxy-gw6hz" is "Ready"
	I1018 08:31:48.366224   10741 pod_ready.go:86] duration metric: took 399.813712ms for pod "kube-proxy-gw6hz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:48.566378   10741 pod_ready.go:83] waiting for pod "kube-scheduler-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:48.966382   10741 pod_ready.go:94] pod "kube-scheduler-addons-757656" is "Ready"
	I1018 08:31:48.966410   10741 pod_ready.go:86] duration metric: took 400.005699ms for pod "kube-scheduler-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:48.966421   10741 pod_ready.go:40] duration metric: took 1.603909567s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 08:31:49.010561   10741 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 08:31:49.012649   10741 out.go:179] * Done! kubectl is now configured to use "addons-757656" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 08:33:04 addons-757656 crio[783]: time="2025-10-18T08:33:04.450709302Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=e9cf5a3c-8c78-4305-b7be-4654ad8f06c8 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:33:04 addons-757656 crio[783]: time="2025-10-18T08:33:04.452423464Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Oct 18 08:33:05 addons-757656 crio[783]: time="2025-10-18T08:33:05.505809922Z" level=info msg="Pulled image: docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=e9cf5a3c-8c78-4305-b7be-4654ad8f06c8 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:33:05 addons-757656 crio[783]: time="2025-10-18T08:33:05.506455725Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=80691deb-e502-4233-9c6f-7186a7e733ce name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:33:05 addons-757656 crio[783]: time="2025-10-18T08:33:05.541249392Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=42bd22ce-4bdc-4b35-aa3d-0c61be071042 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:33:05 addons-757656 crio[783]: time="2025-10-18T08:33:05.545019072Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-h7xh9/registry-creds" id=3aeafd1e-4a5e-4eec-911f-48ed4e5c9965 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 08:33:05 addons-757656 crio[783]: time="2025-10-18T08:33:05.545937386Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:33:05 addons-757656 crio[783]: time="2025-10-18T08:33:05.551247267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:33:05 addons-757656 crio[783]: time="2025-10-18T08:33:05.551744563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:33:05 addons-757656 crio[783]: time="2025-10-18T08:33:05.587726645Z" level=info msg="Created container c0ec2a9d24f81585da250d2c27c5190e0aae8b9cf02a4d28db3b0e890709a6ff: kube-system/registry-creds-764b6fb674-h7xh9/registry-creds" id=3aeafd1e-4a5e-4eec-911f-48ed4e5c9965 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 08:33:05 addons-757656 crio[783]: time="2025-10-18T08:33:05.58839809Z" level=info msg="Starting container: c0ec2a9d24f81585da250d2c27c5190e0aae8b9cf02a4d28db3b0e890709a6ff" id=7b2a1ad0-ee11-44b1-b19e-cd00a2ba8b79 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 08:33:05 addons-757656 crio[783]: time="2025-10-18T08:33:05.590227965Z" level=info msg="Started container" PID=9030 containerID=c0ec2a9d24f81585da250d2c27c5190e0aae8b9cf02a4d28db3b0e890709a6ff description=kube-system/registry-creds-764b6fb674-h7xh9/registry-creds id=7b2a1ad0-ee11-44b1-b19e-cd00a2ba8b79 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2372ea21f3ed81b3c6bd20c60cf87ecfddb182c7bc2ac13f7f4c45709db37207
	Oct 18 08:34:34 addons-757656 crio[783]: time="2025-10-18T08:34:34.146016202Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-9r88h/POD" id=7321ee67-9622-414e-823b-3cce602a6523 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 08:34:34 addons-757656 crio[783]: time="2025-10-18T08:34:34.146119343Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:34:34 addons-757656 crio[783]: time="2025-10-18T08:34:34.151977016Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9r88h Namespace:default ID:4ce344df51fc7621e3680bcb4194bf36f63b9d07c72ab9a88bf9f9001dd1672a UID:ab7d6333-1491-4a5d-9072-b73c3dfd729c NetNS:/var/run/netns/da4b53cc-c2ba-40c0-8885-06111009b45a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000880f78}] Aliases:map[]}"
	Oct 18 08:34:34 addons-757656 crio[783]: time="2025-10-18T08:34:34.152006692Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-9r88h to CNI network \"kindnet\" (type=ptp)"
	Oct 18 08:34:34 addons-757656 crio[783]: time="2025-10-18T08:34:34.161233736Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9r88h Namespace:default ID:4ce344df51fc7621e3680bcb4194bf36f63b9d07c72ab9a88bf9f9001dd1672a UID:ab7d6333-1491-4a5d-9072-b73c3dfd729c NetNS:/var/run/netns/da4b53cc-c2ba-40c0-8885-06111009b45a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000880f78}] Aliases:map[]}"
	Oct 18 08:34:34 addons-757656 crio[783]: time="2025-10-18T08:34:34.161366451Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-9r88h for CNI network kindnet (type=ptp)"
	Oct 18 08:34:34 addons-757656 crio[783]: time="2025-10-18T08:34:34.162295315Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 08:34:34 addons-757656 crio[783]: time="2025-10-18T08:34:34.163453826Z" level=info msg="Ran pod sandbox 4ce344df51fc7621e3680bcb4194bf36f63b9d07c72ab9a88bf9f9001dd1672a with infra container: default/hello-world-app-5d498dc89-9r88h/POD" id=7321ee67-9622-414e-823b-3cce602a6523 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 08:34:34 addons-757656 crio[783]: time="2025-10-18T08:34:34.164686416Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3aea2706-5206-4659-bd23-0fd5ac32acd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:34:34 addons-757656 crio[783]: time="2025-10-18T08:34:34.164792415Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=3aea2706-5206-4659-bd23-0fd5ac32acd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:34:34 addons-757656 crio[783]: time="2025-10-18T08:34:34.16482702Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=3aea2706-5206-4659-bd23-0fd5ac32acd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:34:34 addons-757656 crio[783]: time="2025-10-18T08:34:34.165472005Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=529c4939-9694-418c-ba59-18833616b880 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:34:34 addons-757656 crio[783]: time="2025-10-18T08:34:34.183283584Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	c0ec2a9d24f81       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   2372ea21f3ed8       registry-creds-764b6fb674-h7xh9             kube-system
	00ec79126c8f5       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago        Running             nginx                                    0                   5a68b2ac718d2       nginx                                       default
	f533f115801c8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   1aa2cceba9c59       busybox                                     default
	4790d2a16058f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago        Running             csi-snapshotter                          0                   28f2be34559e5       csi-hostpathplugin-cdc5c                    kube-system
	48cc0cbf614ba       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago        Running             csi-provisioner                          0                   28f2be34559e5       csi-hostpathplugin-cdc5c                    kube-system
	c8379247a51b8       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago        Running             liveness-probe                           0                   28f2be34559e5       csi-hostpathplugin-cdc5c                    kube-system
	941a165621387       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago        Running             hostpath                                 0                   28f2be34559e5       csi-hostpathplugin-cdc5c                    kube-system
	0541dfc3f0d13       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago        Running             gcp-auth                                 0                   8185b4418ee67       gcp-auth-78565c9fb4-z25bv                   gcp-auth
	a4ce05cad528b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago        Running             gadget                                   0                   2e64eeb086c88       gadget-km4ch                                gadget
	14afe1f1816da       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   28f2be34559e5       csi-hostpathplugin-cdc5c                    kube-system
	8f9e1307c9d0a       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago        Running             controller                               0                   c9b5d5846a39a       ingress-nginx-controller-675c5ddd98-9bx4w   ingress-nginx
	440454c3da4c0       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   851baf9bee8f9       registry-proxy-7g848                        kube-system
	bbb9cb4aa33f4       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   66f0e5ae8783f       csi-hostpath-resizer-0                      kube-system
	adf0e53cd5b4b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              patch                                    0                   3a632b124a208       ingress-nginx-admission-patch-4jmq4         ingress-nginx
	0ad5891c05dff       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   97a30391685b9       nvidia-device-plugin-daemonset-bnzlc        kube-system
	f6e03a69b7bc4       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   247281eda0cae       amd-gpu-device-plugin-v82lt                 kube-system
	4171876174cfa       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   28f2be34559e5       csi-hostpathplugin-cdc5c                    kube-system
	813e46f6ecd6f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   b3c8ad4e98913       snapshot-controller-7d9fbc56b8-7zt7h        kube-system
	4eb2f9d88de24       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago        Running             cloud-spanner-emulator                   0                   c07351a604a5b       cloud-spanner-emulator-86bd5cbb97-x5chs     default
	7b9675803326b       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   c15a4e7496003       yakd-dashboard-5ff678cb9-v82m4              yakd-dashboard
	ca11cfbc41b51       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              create                                   0                   e921d9185fb60       ingress-nginx-admission-create-s2qbg        ingress-nginx
	ab63780aacfa0       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   176cb0a01af6b       kube-ingress-dns-minikube                   kube-system
	42811c9c9f88b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   2209bdb6af913       local-path-provisioner-648f6765c9-f8n49     local-path-storage
	72deecf66bdbe       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   2832ed0ad5d31       csi-hostpath-attacher-0                     kube-system
	cd1b8704f38dd       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   b28fa2823e462       snapshot-controller-7d9fbc56b8-nbz64        kube-system
	c08a0e9528c61       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   52283bc5a5a2f       registry-6b586f9694-lbbgc                   kube-system
	c216c132bff88       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   e09a9bd5b3438       metrics-server-85b7d694d7-vl9c2             kube-system
	dbf5c6e8579fb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   668e1c33dd1aa       coredns-66bc5c9577-jc8rc                    kube-system
	7189be801872d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   fe884349ac819       storage-provisioner                         kube-system
	6c971e87dacce       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   d8fa2f4be30c4       kube-proxy-gw6hz                            kube-system
	1511480aef50d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   54168cb719e6a       kindnet-tdxms                               kube-system
	52adc977887b4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   f135fab3e61a1       kube-controller-manager-addons-757656       kube-system
	4faa6d23dba1b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   bddbd7fc6f4aa       kube-apiserver-addons-757656                kube-system
	717994737c9e9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   6c3b44c81dc7d       kube-scheduler-addons-757656                kube-system
	56d69d63fccc1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   9765f88d8612b       etcd-addons-757656                          kube-system
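	
	The listing above is crictl's view of every container joined with its pod sandbox; the control-plane rows at the bottom show bare image IDs where no repo:tag is recorded. A rough by-hand equivalent, assuming node access via minikube ssh (illustrative):
	
	  minikube -p addons-757656 ssh -- sudo crictl ps -a -o table
	  minikube -p addons-757656 ssh -- sudo crictl pods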
	
	
	==> coredns [dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266] <==
	[INFO] 10.244.0.22:42546 - 16831 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.009023751s
	[INFO] 10.244.0.22:52707 - 9375 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004641527s
	[INFO] 10.244.0.22:36457 - 16383 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005984458s
	[INFO] 10.244.0.22:55464 - 29157 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004662359s
	[INFO] 10.244.0.22:34601 - 19730 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00521184s
	[INFO] 10.244.0.22:44135 - 11561 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001151193s
	[INFO] 10.244.0.22:60266 - 50084 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002491243s
	[INFO] 10.244.0.24:35677 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000225084s
	[INFO] 10.244.0.24:42565 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000139078s
	[INFO] 10.244.0.31:48005 - 46795 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000232294s
	[INFO] 10.244.0.31:56840 - 38940 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000294573s
	[INFO] 10.244.0.31:59227 - 18104 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000142823s
	[INFO] 10.244.0.31:60955 - 32411 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000204294s
	[INFO] 10.244.0.31:40115 - 46540 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000109709s
	[INFO] 10.244.0.31:48242 - 6782 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000142802s
	[INFO] 10.244.0.31:36570 - 24884 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.00610052s
	[INFO] 10.244.0.31:33113 - 60985 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.006318535s
	[INFO] 10.244.0.31:55924 - 14885 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.006273488s
	[INFO] 10.244.0.31:59939 - 42935 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.007316298s
	[INFO] 10.244.0.31:46683 - 4566 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004638011s
	[INFO] 10.244.0.31:45850 - 7620 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005091817s
	[INFO] 10.244.0.31:35976 - 35878 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004112087s
	[INFO] 10.244.0.31:52528 - 60378 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005228908s
	[INFO] 10.244.0.31:33934 - 37134 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001823644s
	[INFO] 10.244.0.31:59146 - 3797 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001932416s
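	
	The NXDOMAIN runs above are ordinary ndots:5 search-path expansion: the pod's resolver tries each cluster suffix (svc.cluster.local, cluster.local) and each GCE-internal suffix before the bare name finally answers NOERROR. The same expansion can be observed from the default/busybox pod listed earlier (illustrative commands, assuming busybox's built-in nslookup):
	
	  kubectl exec busybox -- cat /etc/resolv.conf      # search domains plus options ndots:5
	  kubectl exec busybox -- nslookup storage.googleapis.com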
	
	
	==> describe nodes <==
	Name:               addons-757656
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-757656
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=addons-757656
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T08_30_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-757656
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-757656"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 08:30:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-757656
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 08:34:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 08:33:36 +0000   Sat, 18 Oct 2025 08:29:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 08:33:36 +0000   Sat, 18 Oct 2025 08:29:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 08:33:36 +0000   Sat, 18 Oct 2025 08:29:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 08:33:36 +0000   Sat, 18 Oct 2025 08:30:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-757656
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                cfce5e10-5e2d-40cf-8446-b5fe69082a53
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  default                     cloud-spanner-emulator-86bd5cbb97-x5chs      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  default                     hello-world-app-5d498dc89-9r88h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-km4ch                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  gcp-auth                    gcp-auth-78565c9fb4-z25bv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-9bx4w    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m26s
	  kube-system                 amd-gpu-device-plugin-v82lt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 coredns-66bc5c9577-jc8rc                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m27s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 csi-hostpathplugin-cdc5c                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-addons-757656                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m33s
	  kube-system                 kindnet-tdxms                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m28s
	  kube-system                 kube-apiserver-addons-757656                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-controller-manager-addons-757656        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-proxy-gw6hz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-scheduler-addons-757656                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 metrics-server-85b7d694d7-vl9c2              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m26s
	  kube-system                 nvidia-device-plugin-daemonset-bnzlc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 registry-6b586f9694-lbbgc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 registry-creds-764b6fb674-h7xh9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 registry-proxy-7g848                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 snapshot-controller-7d9fbc56b8-7zt7h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 snapshot-controller-7d9fbc56b8-nbz64         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  local-path-storage          local-path-provisioner-648f6765c9-f8n49      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-v82m4               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m25s  kube-proxy       
	  Normal  Starting                 4m33s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m33s  kubelet          Node addons-757656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m33s  kubelet          Node addons-757656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m33s  kubelet          Node addons-757656 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m28s  node-controller  Node addons-757656 event: Registered Node addons-757656 in Controller
	  Normal  NodeReady                3m46s  kubelet          Node addons-757656 status is now: NodeReady
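	
	This snapshot is standard kubectl describe output for the node and can be regenerated at any point against the profile's context (illustrative, assuming minikube's default context name matches the profile):
	
	  kubectl --context addons-757656 describe node addons-757656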
	
	
	==> dmesg <==
	[  +0.101295] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028366] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.196963] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.012248] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.023849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.024040] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +2.047589] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +4.031586] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +8.255150] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[ +16.382250] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[Oct18 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
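	
	The repeated martian-source lines are the kernel flagging packets that claim a 127.0.0.1 source on eth0; loopback-sourced NodePort traffic like this is only routable because kube-proxy sets route_localnet=1 (see its log below), and it is logged because log_martians is presumably enabled in this environment. The relevant sysctls can be checked on the node (illustrative):
	
	  minikube -p addons-757656 ssh -- sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.log_martians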
	
	
	==> etcd [56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14] <==
	{"level":"warn","ts":"2025-10-18T08:29:59.591564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.597548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.603984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.610520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.619588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.626700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.634245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.641845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.648926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.655856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.662442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.669912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.677101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.691439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.703560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.707897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.721037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.768492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:10.612689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:10.619203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:37.173777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:37.193032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:37.200641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47566","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T08:31:04.763809Z","caller":"traceutil/trace.go:172","msg":"trace[186939196] transaction","detail":"{read_only:false; response_revision:1024; number_of_response:1; }","duration":"132.771865ms","start":"2025-10-18T08:31:04.631019Z","end":"2025-10-18T08:31:04.763791Z","steps":["trace[186939196] 'process raft request'  (duration: 77.482815ms)","trace[186939196] 'compare'  (duration: 55.20424ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T08:31:04.793729Z","caller":"traceutil/trace.go:172","msg":"trace[1138554988] transaction","detail":"{read_only:false; response_revision:1025; number_of_response:1; }","duration":"128.09833ms","start":"2025-10-18T08:31:04.665610Z","end":"2025-10-18T08:31:04.793709Z","steps":["trace[1138554988] 'process raft request'  (duration: 127.994937ms)"],"step_count":1}
	
	
	==> gcp-auth [0541dfc3f0d13512640cbd84c6b19fd0626e07b1e5447e26f099f37e1e6efdf6] <==
	2025/10/18 08:31:29 GCP Auth Webhook started!
	2025/10/18 08:31:49 Ready to marshal response ...
	2025/10/18 08:31:49 Ready to write response ...
	2025/10/18 08:31:49 Ready to marshal response ...
	2025/10/18 08:31:49 Ready to write response ...
	2025/10/18 08:31:49 Ready to marshal response ...
	2025/10/18 08:31:49 Ready to write response ...
	2025/10/18 08:32:07 Ready to marshal response ...
	2025/10/18 08:32:07 Ready to write response ...
	2025/10/18 08:32:08 Ready to marshal response ...
	2025/10/18 08:32:08 Ready to write response ...
	2025/10/18 08:32:08 Ready to marshal response ...
	2025/10/18 08:32:08 Ready to write response ...
	2025/10/18 08:32:10 Ready to marshal response ...
	2025/10/18 08:32:10 Ready to write response ...
	2025/10/18 08:32:14 Ready to marshal response ...
	2025/10/18 08:32:14 Ready to write response ...
	2025/10/18 08:32:18 Ready to marshal response ...
	2025/10/18 08:32:18 Ready to write response ...
	2025/10/18 08:32:48 Ready to marshal response ...
	2025/10/18 08:32:48 Ready to write response ...
	2025/10/18 08:34:33 Ready to marshal response ...
	2025/10/18 08:34:33 Ready to write response ...
	
	
	==> kernel <==
	 08:34:35 up 17 min,  0 user,  load average: 0.23, 0.49, 0.26
	Linux addons-757656 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7] <==
	I1018 08:32:29.520460       1 main.go:301] handling current node
	I1018 08:32:39.522558       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:32:39.522592       1 main.go:301] handling current node
	I1018 08:32:49.520934       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:32:49.520978       1 main.go:301] handling current node
	I1018 08:32:59.521462       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:32:59.521494       1 main.go:301] handling current node
	I1018 08:33:09.521696       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:33:09.521735       1 main.go:301] handling current node
	I1018 08:33:19.527553       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:33:19.527601       1 main.go:301] handling current node
	I1018 08:33:29.528799       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:33:29.528842       1 main.go:301] handling current node
	I1018 08:33:39.522452       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:33:39.522483       1 main.go:301] handling current node
	I1018 08:33:49.520645       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:33:49.520675       1 main.go:301] handling current node
	I1018 08:33:59.523978       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:33:59.524013       1 main.go:301] handling current node
	I1018 08:34:09.521137       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:34:09.521167       1 main.go:301] handling current node
	I1018 08:34:19.527024       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:34:19.527054       1 main.go:301] handling current node
	I1018 08:34:29.522218       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:34:29.522246       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8] <==
	W1018 08:30:37.200608       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 08:30:49.731228       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.138.23:443: connect: connection refused
	E1018 08:30:49.731373       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.138.23:443: connect: connection refused" logger="UnhandledError"
	W1018 08:30:49.731303       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.138.23:443: connect: connection refused
	E1018 08:30:49.731756       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.138.23:443: connect: connection refused" logger="UnhandledError"
	W1018 08:30:49.748421       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.138.23:443: connect: connection refused
	E1018 08:30:49.748570       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.138.23:443: connect: connection refused" logger="UnhandledError"
	W1018 08:30:49.751954       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.138.23:443: connect: connection refused
	E1018 08:30:49.751987       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.138.23:443: connect: connection refused" logger="UnhandledError"
	E1018 08:30:52.597396       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.150.136:443: connect: connection refused" logger="UnhandledError"
	W1018 08:30:52.597620       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 08:30:52.597690       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 08:30:52.597993       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.150.136:443: connect: connection refused" logger="UnhandledError"
	E1018 08:30:52.603482       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.150.136:443: connect: connection refused" logger="UnhandledError"
	E1018 08:30:52.624009       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.150.136:443: connect: connection refused" logger="UnhandledError"
	I1018 08:30:52.691154       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 08:31:56.676227       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57468: use of closed network connection
	E1018 08:31:56.829900       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57492: use of closed network connection
	I1018 08:32:10.601682       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 08:32:10.797664       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.132.173"}
	I1018 08:32:25.974494       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1018 08:34:33.916463       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.135.125"}
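	
	The "failing open" pairs above mean the gcp-auth mutating webhook was unreachable while its pod was still starting, and admission proceeded without it (its failurePolicy permits that). Whether the webhook's backing service has endpoints can be checked directly (illustrative):
	
	  kubectl get mutatingwebhookconfigurations
	  kubectl -n gcp-auth get endpoints gcp-auth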
	
	
	==> kube-controller-manager [52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46] <==
	I1018 08:30:07.150319       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 08:30:07.150272       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 08:30:07.150336       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 08:30:07.150365       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 08:30:07.150420       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 08:30:07.150678       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 08:30:07.154907       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 08:30:07.154965       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:30:07.156140       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:30:07.162397       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 08:30:07.162476       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 08:30:07.162515       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 08:30:07.162520       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 08:30:07.162524       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 08:30:07.168894       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-757656" podCIDRs=["10.244.0.0/24"]
	I1018 08:30:07.174117       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 08:30:09.419075       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 08:30:37.160789       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 08:30:37.160911       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 08:30:37.160960       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 08:30:37.183640       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 08:30:37.187317       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 08:30:37.261822       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:30:37.288067       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 08:30:52.104954       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c] <==
	I1018 08:30:09.228223       1 server_linux.go:53] "Using iptables proxy"
	I1018 08:30:09.328988       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 08:30:09.430082       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 08:30:09.430129       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 08:30:09.430217       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 08:30:09.526500       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 08:30:09.526570       1 server_linux.go:132] "Using iptables Proxier"
	I1018 08:30:09.536489       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 08:30:09.543930       1 server.go:527] "Version info" version="v1.34.1"
	I1018 08:30:09.544089       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 08:30:09.549479       1 config.go:309] "Starting node config controller"
	I1018 08:30:09.549501       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 08:30:09.549510       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 08:30:09.550043       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 08:30:09.550303       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 08:30:09.550236       1 config.go:106] "Starting endpoint slice config controller"
	I1018 08:30:09.550400       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 08:30:09.550629       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 08:30:09.550332       1 config.go:200] "Starting service config controller"
	I1018 08:30:09.552481       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 08:30:09.651056       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 08:30:09.653514       1 shared_informer.go:356] "Caches are synced" controller="service config"
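	
	The one warning in this section is kube-proxy noting that nodePortAddresses is unset, so NodePorts accept connections on every local IP including loopback; that is what forces the route_localnet=1 sysctl flagged in the dmesg section. The resulting iptables chain can be inspected on the node (illustrative):
	
	  minikube -p addons-757656 ssh -- sudo iptables -t nat -L KUBE-NODEPORTS -n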
	
	
	==> kube-scheduler [717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047] <==
	E1018 08:30:00.183376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 08:30:00.183415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:30:00.183433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 08:30:00.184115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 08:30:00.184152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:30:00.184291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 08:30:00.184450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 08:30:00.184497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:30:00.183405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 08:30:00.184540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 08:30:00.184649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 08:30:00.184744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 08:30:00.993035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 08:30:01.062377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 08:30:01.074615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:30:01.096153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 08:30:01.160213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 08:30:01.186304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 08:30:01.295719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:30:01.305684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 08:30:01.330915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 08:30:01.336022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 08:30:01.336081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 08:30:01.430742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1018 08:30:04.272611       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 08:32:55 addons-757656 kubelet[1313]: I1018 08:32:55.710778    1313 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^06f0112f-abfd-11f0-8c04-0aec155af7e3\") pod \"828ba3a4-ce64-4049-90e1-7d3ead7c12a4\" (UID: \"828ba3a4-ce64-4049-90e1-7d3ead7c12a4\") "
	Oct 18 08:32:55 addons-757656 kubelet[1313]: I1018 08:32:55.710817    1313 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/828ba3a4-ce64-4049-90e1-7d3ead7c12a4-gcp-creds\") pod \"828ba3a4-ce64-4049-90e1-7d3ead7c12a4\" (UID: \"828ba3a4-ce64-4049-90e1-7d3ead7c12a4\") "
	Oct 18 08:32:55 addons-757656 kubelet[1313]: I1018 08:32:55.710841    1313 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf2pz\" (UniqueName: \"kubernetes.io/projected/828ba3a4-ce64-4049-90e1-7d3ead7c12a4-kube-api-access-zf2pz\") pod \"828ba3a4-ce64-4049-90e1-7d3ead7c12a4\" (UID: \"828ba3a4-ce64-4049-90e1-7d3ead7c12a4\") "
	Oct 18 08:32:55 addons-757656 kubelet[1313]: I1018 08:32:55.710919    1313 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/828ba3a4-ce64-4049-90e1-7d3ead7c12a4-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "828ba3a4-ce64-4049-90e1-7d3ead7c12a4" (UID: "828ba3a4-ce64-4049-90e1-7d3ead7c12a4"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 18 08:32:55 addons-757656 kubelet[1313]: I1018 08:32:55.711010    1313 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/828ba3a4-ce64-4049-90e1-7d3ead7c12a4-gcp-creds\") on node \"addons-757656\" DevicePath \"\""
	Oct 18 08:32:55 addons-757656 kubelet[1313]: I1018 08:32:55.713112    1313 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828ba3a4-ce64-4049-90e1-7d3ead7c12a4-kube-api-access-zf2pz" (OuterVolumeSpecName: "kube-api-access-zf2pz") pod "828ba3a4-ce64-4049-90e1-7d3ead7c12a4" (UID: "828ba3a4-ce64-4049-90e1-7d3ead7c12a4"). InnerVolumeSpecName "kube-api-access-zf2pz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 08:32:55 addons-757656 kubelet[1313]: I1018 08:32:55.713899    1313 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^06f0112f-abfd-11f0-8c04-0aec155af7e3" (OuterVolumeSpecName: "task-pv-storage") pod "828ba3a4-ce64-4049-90e1-7d3ead7c12a4" (UID: "828ba3a4-ce64-4049-90e1-7d3ead7c12a4"). InnerVolumeSpecName "pvc-bb49d276-a156-4a26-9890-786457655300". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 18 08:32:55 addons-757656 kubelet[1313]: I1018 08:32:55.811313    1313 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zf2pz\" (UniqueName: \"kubernetes.io/projected/828ba3a4-ce64-4049-90e1-7d3ead7c12a4-kube-api-access-zf2pz\") on node \"addons-757656\" DevicePath \"\""
	Oct 18 08:32:55 addons-757656 kubelet[1313]: I1018 08:32:55.811423    1313 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-bb49d276-a156-4a26-9890-786457655300\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^06f0112f-abfd-11f0-8c04-0aec155af7e3\") on node \"addons-757656\" "
	Oct 18 08:32:55 addons-757656 kubelet[1313]: I1018 08:32:55.815738    1313 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-bb49d276-a156-4a26-9890-786457655300" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^06f0112f-abfd-11f0-8c04-0aec155af7e3") on node "addons-757656"
	Oct 18 08:32:55 addons-757656 kubelet[1313]: I1018 08:32:55.912044    1313 reconciler_common.go:299] "Volume detached for volume \"pvc-bb49d276-a156-4a26-9890-786457655300\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^06f0112f-abfd-11f0-8c04-0aec155af7e3\") on node \"addons-757656\" DevicePath \"\""
	Oct 18 08:32:56 addons-757656 kubelet[1313]: I1018 08:32:56.128618    1313 scope.go:117] "RemoveContainer" containerID="d267a57c5364ac85c3e97df08f460955fd2a417d308dac59f22ba3068bd248b0"
	Oct 18 08:32:56 addons-757656 kubelet[1313]: I1018 08:32:56.139329    1313 scope.go:117] "RemoveContainer" containerID="d267a57c5364ac85c3e97df08f460955fd2a417d308dac59f22ba3068bd248b0"
	Oct 18 08:32:56 addons-757656 kubelet[1313]: E1018 08:32:56.139859    1313 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d267a57c5364ac85c3e97df08f460955fd2a417d308dac59f22ba3068bd248b0\": container with ID starting with d267a57c5364ac85c3e97df08f460955fd2a417d308dac59f22ba3068bd248b0 not found: ID does not exist" containerID="d267a57c5364ac85c3e97df08f460955fd2a417d308dac59f22ba3068bd248b0"
	Oct 18 08:32:56 addons-757656 kubelet[1313]: I1018 08:32:56.139909    1313 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d267a57c5364ac85c3e97df08f460955fd2a417d308dac59f22ba3068bd248b0"} err="failed to get container status \"d267a57c5364ac85c3e97df08f460955fd2a417d308dac59f22ba3068bd248b0\": rpc error: code = NotFound desc = could not find container \"d267a57c5364ac85c3e97df08f460955fd2a417d308dac59f22ba3068bd248b0\": container with ID starting with d267a57c5364ac85c3e97df08f460955fd2a417d308dac59f22ba3068bd248b0 not found: ID does not exist"
	Oct 18 08:32:56 addons-757656 kubelet[1313]: I1018 08:32:56.431598    1313 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="828ba3a4-ce64-4049-90e1-7d3ead7c12a4" path="/var/lib/kubelet/pods/828ba3a4-ce64-4049-90e1-7d3ead7c12a4/volumes"
	Oct 18 08:33:02 addons-757656 kubelet[1313]: I1018 08:33:02.448239    1313 scope.go:117] "RemoveContainer" containerID="29f61cdf67dfeeac8240cb95a50e45f6e4f8df9c39cb144a3d2e69b64d61dbfb"
	Oct 18 08:33:02 addons-757656 kubelet[1313]: I1018 08:33:02.456515    1313 scope.go:117] "RemoveContainer" containerID="71801a91e436b7bc14d0d1342a4fa8aa84de57dbc9db4e86b3fe1c96b7965728"
	Oct 18 08:33:02 addons-757656 kubelet[1313]: I1018 08:33:02.464603    1313 scope.go:117] "RemoveContainer" containerID="f3797d1eb4d55cda9fab8175019202d6aea705dc874edbd2614585e8245715fe"
	Oct 18 08:33:06 addons-757656 kubelet[1313]: I1018 08:33:06.183578    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-h7xh9" podStartSLOduration=177.126513918 podStartE2EDuration="2m58.183558282s" podCreationTimestamp="2025-10-18 08:30:08 +0000 UTC" firstStartedPulling="2025-10-18 08:33:04.450358164 +0000 UTC m=+182.102574628" lastFinishedPulling="2025-10-18 08:33:05.507402543 +0000 UTC m=+183.159618992" observedRunningTime="2025-10-18 08:33:06.182077127 +0000 UTC m=+183.834293593" watchObservedRunningTime="2025-10-18 08:33:06.183558282 +0000 UTC m=+183.835774749"
	Oct 18 08:33:29 addons-757656 kubelet[1313]: I1018 08:33:29.428702    1313 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-bnzlc" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:33:54 addons-757656 kubelet[1313]: I1018 08:33:54.428126    1313 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-v82lt" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:34:03 addons-757656 kubelet[1313]: I1018 08:34:03.428701    1313 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7g848" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:34:33 addons-757656 kubelet[1313]: I1018 08:34:33.927463    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4dtz\" (UniqueName: \"kubernetes.io/projected/ab7d6333-1491-4a5d-9072-b73c3dfd729c-kube-api-access-r4dtz\") pod \"hello-world-app-5d498dc89-9r88h\" (UID: \"ab7d6333-1491-4a5d-9072-b73c3dfd729c\") " pod="default/hello-world-app-5d498dc89-9r88h"
	Oct 18 08:34:33 addons-757656 kubelet[1313]: I1018 08:34:33.927550    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ab7d6333-1491-4a5d-9072-b73c3dfd729c-gcp-creds\") pod \"hello-world-app-5d498dc89-9r88h\" (UID: \"ab7d6333-1491-4a5d-9072-b73c3dfd729c\") " pod="default/hello-world-app-5d498dc89-9r88h"
	
	
	==> storage-provisioner [7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9] <==
	W1018 08:34:11.118081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:13.121022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:13.124854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:15.127582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:15.132680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:17.135374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:17.140293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:19.143011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:19.146774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:21.149547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:21.154657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:23.157506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:23.161457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:25.165019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:25.169763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:27.172786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:27.176704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:29.179560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:29.183500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:31.186321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:31.191506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:33.194401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:33.198772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:35.202428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:34:35.206385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
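The storage-provisioner block above ends in a steady stream of deprecation warnings: the provisioner still reads and updates v1 Endpoints every couple of seconds (most likely for its leader-election lock), and the API server answers each request with a pointer to discovery.k8s.io/v1 EndpointSlice. A minimal sketch of the replacement watch, assuming client-go v0.21+ and in-cluster credentials; this is illustrative, not the provisioner's actual code:

	// Minimal sketch, assuming client-go v0.21+ and in-cluster credentials.
	// It watches discovery.k8s.io/v1 EndpointSlice, the resource the
	// deprecation warnings above point to; names here are illustrative.
	package main

	import (
		"fmt"
		"time"

		discoveryv1 "k8s.io/api/discovery/v1"
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// A shared informer replaces the ~2s polling loop visible in the log:
		// the API server pushes EndpointSlice changes instead.
		factory := informers.NewSharedInformerFactory(client, 30*time.Second)
		informer := factory.Discovery().V1().EndpointSlices().Informer()
		informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				es := obj.(*discoveryv1.EndpointSlice)
				fmt.Printf("endpointslice %s/%s: %d endpoints\n",
					es.Namespace, es.Name, len(es.Endpoints))
			},
		})

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
		factory.WaitForCacheSync(stop)
		<-stop
	}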
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-757656 -n addons-757656
helpers_test.go:269: (dbg) Run:  kubectl --context addons-757656 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-9r88h ingress-nginx-admission-create-s2qbg ingress-nginx-admission-patch-4jmq4
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-757656 describe pod hello-world-app-5d498dc89-9r88h ingress-nginx-admission-create-s2qbg ingress-nginx-admission-patch-4jmq4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-757656 describe pod hello-world-app-5d498dc89-9r88h ingress-nginx-admission-create-s2qbg ingress-nginx-admission-patch-4jmq4: exit status 1 (66.095483ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-9r88h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-757656/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 08:34:33 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r4dtz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r4dtz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-9r88h to addons-757656
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.281s (1.281s including waiting). Image size: 4944818 bytes.
	  Normal  Created    1s    kubelet            Created container: hello-world-app
	  Normal  Started    1s    kubelet            Started container hello-world-app

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-s2qbg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4jmq4" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-757656 describe pod hello-world-app-5d498dc89-9r88h ingress-nginx-admission-create-s2qbg ingress-nginx-admission-patch-4jmq4: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (234.319878ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:34:36.455672   25603 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:34:36.455971   25603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:34:36.455982   25603 out.go:374] Setting ErrFile to fd 2...
	I1018 08:34:36.455987   25603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:34:36.456183   25603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:34:36.456476   25603 mustload.go:65] Loading cluster: addons-757656
	I1018 08:34:36.456829   25603 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:34:36.456843   25603 addons.go:606] checking whether the cluster is paused
	I1018 08:34:36.456919   25603 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:34:36.456930   25603 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:34:36.457256   25603 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:34:36.475206   25603 ssh_runner.go:195] Run: systemctl --version
	I1018 08:34:36.475258   25603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:34:36.493390   25603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:34:36.592102   25603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:34:36.592207   25603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:34:36.622727   25603 cri.go:89] found id: "c0ec2a9d24f81585da250d2c27c5190e0aae8b9cf02a4d28db3b0e890709a6ff"
	I1018 08:34:36.622749   25603 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:34:36.622755   25603 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:34:36.622760   25603 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:34:36.622764   25603 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:34:36.622769   25603 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:34:36.622773   25603 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:34:36.622776   25603 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:34:36.622779   25603 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:34:36.622786   25603 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:34:36.622789   25603 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:34:36.622791   25603 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:34:36.622794   25603 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:34:36.622797   25603 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:34:36.622799   25603 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:34:36.622808   25603 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:34:36.622814   25603 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:34:36.622818   25603 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:34:36.622822   25603 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:34:36.622826   25603 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:34:36.622830   25603 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:34:36.622835   25603 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:34:36.622839   25603 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:34:36.622846   25603 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:34:36.622851   25603 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:34:36.622859   25603 cri.go:89] found id: ""
	I1018 08:34:36.622904   25603 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:34:36.636872   25603 out.go:203] 
	W1018 08:34:36.638185   25603 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:34:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:34:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:34:36.638204   25603 out.go:285] * 
	* 
	W1018 08:34:36.641259   25603 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:34:36.642766   25603 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
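Every exit-status-11 failure in this report shares the stderr signature above: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers through crictl and then shelling out to `sudo runc list -f json`, and on this CRI-O node the runc state directory /run/runc does not exist, so the pause probe (not the addon operation itself) aborts with MK_ADDON_DISABLE_PAUSED. A minimal sketch of that two-step probe, reconstructed from the commands visible in the log rather than copied from minikube's cri.go:

	// Minimal sketch of the pause probe the log shows; the command names come
	// from the transcript, the Go wrapper around them is illustrative.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Step 1: list kube-system containers through the CRI. In the log
		// above this step succeeds and returns a list of container IDs.
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("crictl returned %d bytes of container IDs\n", len(ids))

		// Step 2: ask runc which containers are paused. On this CRI-O node
		// runc's default state root /run/runc does not exist, so the command
		// exits non-zero with "open /run/runc: no such file or directory" --
		// the failure above -- even though nothing is actually paused.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Println(string(out))
	}

One plausible reading is that this CRI-O image keeps its runtime state somewhere other than runc's default /run/runc, so a pause probe pinned to that default can never succeed there, which is consistent with the same error recurring in every addon-disable attempt below.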
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable ingress --alsologtostderr -v=1: exit status 11 (232.332668ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:34:36.690791   25668 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:34:36.691101   25668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:34:36.691112   25668 out.go:374] Setting ErrFile to fd 2...
	I1018 08:34:36.691116   25668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:34:36.691332   25668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:34:36.691645   25668 mustload.go:65] Loading cluster: addons-757656
	I1018 08:34:36.691979   25668 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:34:36.691994   25668 addons.go:606] checking whether the cluster is paused
	I1018 08:34:36.692075   25668 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:34:36.692086   25668 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:34:36.692482   25668 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:34:36.711107   25668 ssh_runner.go:195] Run: systemctl --version
	I1018 08:34:36.711159   25668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:34:36.729870   25668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:34:36.826099   25668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:34:36.826191   25668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:34:36.854654   25668 cri.go:89] found id: "c0ec2a9d24f81585da250d2c27c5190e0aae8b9cf02a4d28db3b0e890709a6ff"
	I1018 08:34:36.854680   25668 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:34:36.854685   25668 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:34:36.854688   25668 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:34:36.854690   25668 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:34:36.854694   25668 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:34:36.854697   25668 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:34:36.854699   25668 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:34:36.854702   25668 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:34:36.854712   25668 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:34:36.854719   25668 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:34:36.854722   25668 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:34:36.854724   25668 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:34:36.854726   25668 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:34:36.854729   25668 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:34:36.854738   25668 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:34:36.854745   25668 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:34:36.854751   25668 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:34:36.854755   25668 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:34:36.854758   25668 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:34:36.854762   25668 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:34:36.854766   25668 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:34:36.854770   25668 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:34:36.854774   25668 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:34:36.854777   25668 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:34:36.854780   25668 cri.go:89] found id: ""
	I1018 08:34:36.854825   25668 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:34:36.869683   25668 out.go:203] 
	W1018 08:34:36.871107   25668 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:34:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:34:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:34:36.871125   25668 out.go:285] * 
	* 
	W1018 08:34:36.874141   25668 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:34:36.875432   25668 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.52s)

TestAddons/parallel/InspektorGadget (6.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-km4ch" [50138f19-ad8a-4813-81cf-49b16340dcad] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003558964s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (246.91603ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:32:19.860391   22491 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:32:19.860820   22491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:19.860834   22491 out.go:374] Setting ErrFile to fd 2...
	I1018 08:32:19.860841   22491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:19.861306   22491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:32:19.861837   22491 mustload.go:65] Loading cluster: addons-757656
	I1018 08:32:19.862388   22491 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:19.862412   22491 addons.go:606] checking whether the cluster is paused
	I1018 08:32:19.862516   22491 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:19.862533   22491 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:32:19.862932   22491 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:32:19.883519   22491 ssh_runner.go:195] Run: systemctl --version
	I1018 08:32:19.883566   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:32:19.903737   22491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:32:20.005474   22491 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:32:20.005566   22491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:32:20.037550   22491 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:32:20.037582   22491 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:32:20.037588   22491 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:32:20.037603   22491 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:32:20.037607   22491 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:32:20.037612   22491 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:32:20.037616   22491 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:32:20.037619   22491 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:32:20.037623   22491 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:32:20.037631   22491 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:32:20.037636   22491 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:32:20.037640   22491 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:32:20.037643   22491 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:32:20.037647   22491 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:32:20.037652   22491 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:32:20.037666   22491 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:32:20.037672   22491 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:32:20.037676   22491 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:32:20.037678   22491 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:32:20.037681   22491 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:32:20.037683   22491 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:32:20.037686   22491 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:32:20.037688   22491 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:32:20.037691   22491 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:32:20.037693   22491 cri.go:89] found id: ""
	I1018 08:32:20.037749   22491 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:32:20.052971   22491 out.go:203] 
	W1018 08:32:20.054455   22491 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:32:20.054476   22491 out.go:285] * 
	* 
	W1018 08:32:20.057710   22491 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:32:20.058977   22491 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.25s)

TestAddons/parallel/MetricsServer (5.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.227416ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-vl9c2" [662d002a-23b4-4f7f-a0bc-3a0813819aa2] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002526744s
addons_test.go:463: (dbg) Run:  kubectl --context addons-757656 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (236.48907ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:32:02.186575   20131 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:32:02.186722   20131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:02.186732   20131 out.go:374] Setting ErrFile to fd 2...
	I1018 08:32:02.186736   20131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:02.186943   20131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:32:02.187399   20131 mustload.go:65] Loading cluster: addons-757656
	I1018 08:32:02.187815   20131 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:02.187835   20131 addons.go:606] checking whether the cluster is paused
	I1018 08:32:02.187929   20131 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:02.187945   20131 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:32:02.188363   20131 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:32:02.207672   20131 ssh_runner.go:195] Run: systemctl --version
	I1018 08:32:02.207862   20131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:32:02.226822   20131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:32:02.323822   20131 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:32:02.323901   20131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:32:02.352907   20131 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:32:02.352933   20131 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:32:02.352937   20131 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:32:02.352940   20131 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:32:02.352943   20131 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:32:02.352945   20131 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:32:02.352949   20131 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:32:02.352953   20131 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:32:02.352957   20131 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:32:02.352964   20131 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:32:02.352968   20131 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:32:02.352973   20131 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:32:02.352977   20131 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:32:02.352981   20131 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:32:02.352985   20131 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:32:02.352992   20131 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:32:02.352999   20131 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:32:02.353003   20131 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:32:02.353006   20131 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:32:02.353008   20131 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:32:02.353015   20131 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:32:02.353017   20131 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:32:02.353020   20131 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:32:02.353022   20131 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:32:02.353024   20131 cri.go:89] found id: ""
	I1018 08:32:02.353068   20131 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:32:02.367205   20131 out.go:203] 
	W1018 08:32:02.368423   20131 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:32:02.368446   20131 out.go:285] * 
	* 
	W1018 08:32:02.371407   20131 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:32:02.372939   20131 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)

TestAddons/parallel/CSI (57.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1018 08:31:59.617658    9394 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1018 08:31:59.620993    9394 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 08:31:59.621023    9394 kapi.go:107] duration metric: took 3.379952ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.393379ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-757656 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-757656 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [3b3f2b07-d9d5-4809-b36b-1de6da5a58f7] Pending
helpers_test.go:352: "task-pv-pod" [3b3f2b07-d9d5-4809-b36b-1de6da5a58f7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [3b3f2b07-d9d5-4809-b36b-1de6da5a58f7] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003194522s
addons_test.go:572: (dbg) Run:  kubectl --context addons-757656 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-757656 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-757656 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
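The two polls above are the harness waiting for the VolumeSnapshot to report .status.readyToUse=true before tearing the pod down. The same check is easy to reproduce outside the test binary; a minimal sketch, assuming kubectl on PATH and the profile context shown in this transcript:

	// Minimal sketch of the readiness loop helpers_test.go drives via
	// jsonpath; kubectl on PATH and the addons-757656 context are assumed.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait above
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-757656",
				"get", "volumesnapshot", "new-snapshot-demo", "-n", "default",
				"-o", "jsonpath={.status.readyToUse}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "true" {
				fmt.Println("snapshot ready")
				return
			}
			time.Sleep(2 * time.Second) // re-poll, as the repeated Run lines show
		}
		fmt.Println("timed out waiting for new-snapshot-demo")
	}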
addons_test.go:582: (dbg) Run:  kubectl --context addons-757656 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-757656 delete pod task-pv-pod: (1.003412999s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-757656 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-757656 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-757656 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [828ba3a4-ce64-4049-90e1-7d3ead7c12a4] Pending
helpers_test.go:352: "task-pv-pod-restore" [828ba3a4-ce64-4049-90e1-7d3ead7c12a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [828ba3a4-ce64-4049-90e1-7d3ead7c12a4] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003679561s
addons_test.go:614: (dbg) Run:  kubectl --context addons-757656 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-757656 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-757656 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (231.447463ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:32:56.518888   23418 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:32:56.519192   23418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:56.519203   23418 out.go:374] Setting ErrFile to fd 2...
	I1018 08:32:56.519209   23418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:56.519461   23418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:32:56.519744   23418 mustload.go:65] Loading cluster: addons-757656
	I1018 08:32:56.520092   23418 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:56.520111   23418 addons.go:606] checking whether the cluster is paused
	I1018 08:32:56.520207   23418 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:56.520228   23418 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:32:56.520679   23418 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:32:56.539186   23418 ssh_runner.go:195] Run: systemctl --version
	I1018 08:32:56.539254   23418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:32:56.557677   23418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:32:56.654136   23418 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:32:56.654226   23418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:32:56.683426   23418 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:32:56.683446   23418 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:32:56.683451   23418 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:32:56.683455   23418 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:32:56.683458   23418 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:32:56.683461   23418 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:32:56.683464   23418 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:32:56.683466   23418 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:32:56.683469   23418 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:32:56.683474   23418 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:32:56.683477   23418 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:32:56.683479   23418 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:32:56.683482   23418 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:32:56.683493   23418 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:32:56.683506   23418 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:32:56.683513   23418 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:32:56.683516   23418 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:32:56.683519   23418 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:32:56.683522   23418 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:32:56.683524   23418 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:32:56.683526   23418 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:32:56.683529   23418 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:32:56.683531   23418 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:32:56.683533   23418 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:32:56.683535   23418 cri.go:89] found id: ""
	I1018 08:32:56.683572   23418 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:32:56.697525   23418 out.go:203] 
	W1018 08:32:56.698666   23418 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:32:56.698682   23418 out.go:285] * 
	* 
	W1018 08:32:56.701688   23418 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:32:56.702859   23418 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
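Every addons enable/disable failure in this report follows the pattern in the stderr above: addons.go checks whether the cluster is paused, the crictl listing succeeds and returns two dozen container IDs, and then `sudo runc list -f json` exits 1 because /run/runc does not exist on this crio node. A sketch that replays the two probes (run on the node, e.g. via `minikube ssh`; both commands are copied from the trace):

	// probe.go - replays the paused-state probes from the stderr trace above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Succeeds: crio answers over the CRI socket and prints container IDs.
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		fmt.Printf("crictl ps: err=%v (%d bytes of IDs)\n", err, len(ids))

		// Fails: runc's default state dir /run/runc is absent on this node, so the
		// command exits 1 with "open /run/runc: no such file or directory".
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("runc list: err=%v\n%s", err, out)
	}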
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (229.006523ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:32:56.751119   23480 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:32:56.751401   23480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:56.751408   23480 out.go:374] Setting ErrFile to fd 2...
	I1018 08:32:56.751413   23480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:56.751627   23480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:32:56.751871   23480 mustload.go:65] Loading cluster: addons-757656
	I1018 08:32:56.752226   23480 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:56.752243   23480 addons.go:606] checking whether the cluster is paused
	I1018 08:32:56.752328   23480 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:56.752353   23480 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:32:56.752716   23480 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:32:56.770808   23480 ssh_runner.go:195] Run: systemctl --version
	I1018 08:32:56.770868   23480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:32:56.789086   23480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:32:56.883961   23480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:32:56.884042   23480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:32:56.913270   23480 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:32:56.913289   23480 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:32:56.913293   23480 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:32:56.913296   23480 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:32:56.913299   23480 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:32:56.913302   23480 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:32:56.913304   23480 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:32:56.913307   23480 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:32:56.913309   23480 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:32:56.913321   23480 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:32:56.913323   23480 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:32:56.913326   23480 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:32:56.913328   23480 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:32:56.913331   23480 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:32:56.913333   23480 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:32:56.913361   23480 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:32:56.913366   23480 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:32:56.913373   23480 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:32:56.913377   23480 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:32:56.913394   23480 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:32:56.913398   23480 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:32:56.913405   23480 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:32:56.913409   23480 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:32:56.913417   23480 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:32:56.913420   23480 cri.go:89] found id: ""
	I1018 08:32:56.913460   23480 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:32:56.926848   23480 out.go:203] 
	W1018 08:32:56.928011   23480 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:32:56.928029   23480 out.go:285] * 
	* 
	W1018 08:32:56.931050   23480 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:32:56.932432   23480 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (57.32s)

TestAddons/parallel/Headlamp (2.55s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-757656 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-757656 --alsologtostderr -v=1: exit status 11 (233.402581ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:31:57.113236   19240 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:31:57.113573   19240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:31:57.113584   19240 out.go:374] Setting ErrFile to fd 2...
	I1018 08:31:57.113589   19240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:31:57.113802   19240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:31:57.114083   19240 mustload.go:65] Loading cluster: addons-757656
	I1018 08:31:57.114451   19240 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:31:57.114469   19240 addons.go:606] checking whether the cluster is paused
	I1018 08:31:57.114584   19240 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:31:57.114599   19240 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:31:57.114974   19240 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:31:57.133289   19240 ssh_runner.go:195] Run: systemctl --version
	I1018 08:31:57.133362   19240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:31:57.151651   19240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:31:57.246969   19240 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:31:57.247041   19240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:31:57.276338   19240 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:31:57.276373   19240 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:31:57.276379   19240 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:31:57.276385   19240 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:31:57.276389   19240 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:31:57.276393   19240 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:31:57.276396   19240 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:31:57.276400   19240 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:31:57.276405   19240 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:31:57.276419   19240 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:31:57.276426   19240 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:31:57.276431   19240 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:31:57.276439   19240 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:31:57.276442   19240 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:31:57.276445   19240 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:31:57.276449   19240 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:31:57.276452   19240 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:31:57.276457   19240 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:31:57.276459   19240 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:31:57.276462   19240 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:31:57.276464   19240 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:31:57.276466   19240 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:31:57.276469   19240 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:31:57.276471   19240 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:31:57.276473   19240 cri.go:89] found id: ""
	I1018 08:31:57.276519   19240 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:31:57.290858   19240 out.go:203] 
	W1018 08:31:57.292428   19240 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:31:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:31:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:31:57.292452   19240 out.go:285] * 
	* 
	W1018 08:31:57.295554   19240 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:31:57.297044   19240 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-757656 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-757656
helpers_test.go:243: (dbg) docker inspect addons-757656:

-- stdout --
	[
	    {
	        "Id": "df669ff7ec7e833eb29b74b0e3b95910965d3c06c3a09ea38921298da52bcf45",
	        "Created": "2025-10-18T08:29:48.229528523Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11387,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T08:29:48.271883206Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/df669ff7ec7e833eb29b74b0e3b95910965d3c06c3a09ea38921298da52bcf45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df669ff7ec7e833eb29b74b0e3b95910965d3c06c3a09ea38921298da52bcf45/hostname",
	        "HostsPath": "/var/lib/docker/containers/df669ff7ec7e833eb29b74b0e3b95910965d3c06c3a09ea38921298da52bcf45/hosts",
	        "LogPath": "/var/lib/docker/containers/df669ff7ec7e833eb29b74b0e3b95910965d3c06c3a09ea38921298da52bcf45/df669ff7ec7e833eb29b74b0e3b95910965d3c06c3a09ea38921298da52bcf45-json.log",
	        "Name": "/addons-757656",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-757656:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-757656",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "df669ff7ec7e833eb29b74b0e3b95910965d3c06c3a09ea38921298da52bcf45",
	                "LowerDir": "/var/lib/docker/overlay2/ec7921e7b6e429da7803d599e184f18672b75401cf485407ffe907779d476778-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec7921e7b6e429da7803d599e184f18672b75401cf485407ffe907779d476778/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec7921e7b6e429da7803d599e184f18672b75401cf485407ffe907779d476778/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec7921e7b6e429da7803d599e184f18672b75401cf485407ffe907779d476778/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-757656",
	                "Source": "/var/lib/docker/volumes/addons-757656/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-757656",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-757656",
	                "name.minikube.sigs.k8s.io": "addons-757656",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b47e490d1f8f9ee24203224605c22aaebaa70dd6240f5bf5cda00a52e2183a36",
	            "SandboxKey": "/var/run/docker/netns/b47e490d1f8f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-757656": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:a0:b6:7c:53:e7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1527547d992144c63156fafd65c37b1dece89a9ba9e6ee31e056182fd935ba2",
	                    "EndpointID": "9f2c5917b57aa4f7c58b5f3d017b52c25c9e69a61ef1707c3c958144638b4934",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-757656",
	                        "df669ff7ec7e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
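The inspect output above confirms what the earlier cli_runner template call resolved: the container publishes 22/tcp at 127.0.0.1:32768, which is the endpoint sshutil then dialed. A sketch of the same lookup done against the `docker inspect` JSON rather than the Go template (the struct is trimmed to the fields this query needs):

	// sshport.go - resolve the published host port for 22/tcp from `docker inspect`.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		raw, err := exec.Command("docker", "inspect", "addons-757656").Output()
		if err != nil {
			panic(err)
		}
		var cs []container
		if err := json.Unmarshal(raw, &cs); err != nil {
			panic(err)
		}
		ssh := cs[0].NetworkSettings.Ports["22/tcp"][0]
		fmt.Printf("ssh endpoint: %s:%s\n", ssh.HostIp, ssh.HostPort) // 127.0.0.1:32768 above
	}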
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-757656 -n addons-757656
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-757656 logs -n 25: (1.165082897s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-746820 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-746820   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-746820                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-746820   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-330759 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-330759   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-330759                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-330759   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-746820                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-746820   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-330759                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-330759   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ start   │ --download-only -p download-docker-215465 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-215465 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ delete  │ -p download-docker-215465                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-215465 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-658787 --alsologtostderr --binary-mirror http://127.0.0.1:33031 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-658787   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-658787                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-658787   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ addons  │ disable dashboard -p addons-757656                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-757656          │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-757656                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-757656          │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ start   │ -p addons-757656 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-757656          │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:31 UTC │
	│ addons  │ addons-757656 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-757656          │ jenkins │ v1.37.0 │ 18 Oct 25 08:31 UTC │                     │
	│ addons  │ addons-757656 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-757656          │ jenkins │ v1.37.0 │ 18 Oct 25 08:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-757656 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-757656          │ jenkins │ v1.37.0 │ 18 Oct 25 08:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:29:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
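Every entry that follows obeys the klog header format declared on the "Log line format" line above. A small parser sketch for that header (the regexp is derived from the declared format; the field names are mine, and the sample line is the first entry below):

	// klogparse.go - split a klog-format header per the declared format.
	package main

	import (
		"fmt"
		"regexp"
	)

	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		line := `I1018 08:29:24.093914   10741 out.go:360] Setting OutFile to fd 1 ...`
		m := klogRe.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("level=%s date=%s time=%s pid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}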
	I1018 08:29:24.093914   10741 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:29:24.094049   10741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:24.094061   10741 out.go:374] Setting ErrFile to fd 2...
	I1018 08:29:24.094068   10741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:24.094259   10741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:29:24.094808   10741 out.go:368] Setting JSON to false
	I1018 08:29:24.095583   10741 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":712,"bootTime":1760775452,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:29:24.095662   10741 start.go:141] virtualization: kvm guest
	I1018 08:29:24.097700   10741 out.go:179] * [addons-757656] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:29:24.099157   10741 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 08:29:24.099160   10741 notify.go:220] Checking for updates...
	I1018 08:29:24.101888   10741 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:29:24.103369   10741 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 08:29:24.104735   10741 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 08:29:24.106062   10741 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 08:29:24.107350   10741 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:29:24.108610   10741 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:29:24.130454   10741 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 08:29:24.130551   10741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:24.187500   10741 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-18 08:29:24.177828343 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:29:24.187614   10741 docker.go:318] overlay module found
	I1018 08:29:24.189982   10741 out.go:179] * Using the docker driver based on user configuration
	I1018 08:29:24.191310   10741 start.go:305] selected driver: docker
	I1018 08:29:24.191330   10741 start.go:925] validating driver "docker" against <nil>
	I1018 08:29:24.191371   10741 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:29:24.191925   10741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:24.245550   10741 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-18 08:29:24.23642069 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
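For reference, the same docker info dump can be reduced to the handful of fields the driver validation actually cares about; a minimal sketch using docker's own --format templating (field names exactly as they appear in the JSON above):

    docker system info --format '{{.CgroupDriver}} {{.ServerVersion}} {{.OperatingSystem}}'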
	I1018 08:29:24.245698   10741 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:29:24.245923   10741 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:29:24.247823   10741 out.go:179] * Using Docker driver with root privileges
	I1018 08:29:24.249111   10741 cni.go:84] Creating CNI manager for ""
	I1018 08:29:24.249175   10741 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:29:24.249186   10741 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 08:29:24.249276   10741 start.go:349] cluster config:
	{Name:addons-757656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:29:24.250912   10741 out.go:179] * Starting "addons-757656" primary control-plane node in "addons-757656" cluster
	I1018 08:29:24.252082   10741 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 08:29:24.253486   10741 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 08:29:24.254844   10741 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:29:24.254880   10741 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 08:29:24.254887   10741 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 08:29:24.254896   10741 cache.go:58] Caching tarball of preloaded images
	I1018 08:29:24.254991   10741 preload.go:233] Found /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 08:29:24.255006   10741 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 08:29:24.255418   10741 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/config.json ...
	I1018 08:29:24.255446   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/config.json: {Name:mk554a6a07222424ec37abcb218df63c14178bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:24.271705   10741 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 08:29:24.271897   10741 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 08:29:24.271920   10741 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 08:29:24.271926   10741 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 08:29:24.271938   10741 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 08:29:24.271948   10741 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 08:29:36.474287   10741 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 08:29:36.474329   10741 cache.go:232] Successfully downloaded all kic artifacts
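The "Checking for ... in local docker daemon" step above boils down to an image lookup; a sketch of the equivalent manual check, using the digest-pinned ref from this run:

    # exits non-zero if the kicbase image is not present locally
    docker image inspect --format '{{.Id}}' \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6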
	I1018 08:29:36.474376   10741 start.go:360] acquireMachinesLock for addons-757656: {Name:mkc2473273a000321588bf99eb2b2fb8faac67ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:29:36.474481   10741 start.go:364] duration metric: took 84.004µs to acquireMachinesLock for "addons-757656"
	I1018 08:29:36.474511   10741 start.go:93] Provisioning new machine with config: &{Name:addons-757656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:29:36.474580   10741 start.go:125] createHost starting for "" (driver="docker")
	I1018 08:29:36.476411   10741 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 08:29:36.476608   10741 start.go:159] libmachine.API.Create for "addons-757656" (driver="docker")
	I1018 08:29:36.476638   10741 client.go:168] LocalClient.Create starting
	I1018 08:29:36.476754   10741 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem
	I1018 08:29:36.576204   10741 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem
	I1018 08:29:36.824033   10741 cli_runner.go:164] Run: docker network inspect addons-757656 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 08:29:36.840911   10741 cli_runner.go:211] docker network inspect addons-757656 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 08:29:36.840985   10741 network_create.go:284] running [docker network inspect addons-757656] to gather additional debugging logs...
	I1018 08:29:36.841004   10741 cli_runner.go:164] Run: docker network inspect addons-757656
	W1018 08:29:36.857274   10741 cli_runner.go:211] docker network inspect addons-757656 returned with exit code 1
	I1018 08:29:36.857301   10741 network_create.go:287] error running [docker network inspect addons-757656]: docker network inspect addons-757656: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-757656 not found
	I1018 08:29:36.857316   10741 network_create.go:289] output of [docker network inspect addons-757656]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-757656 not found
	
	** /stderr **
	I1018 08:29:36.857464   10741 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 08:29:36.874425   10741 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ce8620}
	I1018 08:29:36.874467   10741 network_create.go:124] attempt to create docker network addons-757656 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 08:29:36.874522   10741 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-757656 addons-757656
	I1018 08:29:36.929763   10741 network_create.go:108] docker network addons-757656 192.168.49.0/24 created
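The created network can be checked against the subnet and gateway picked above; a sketch:

    # expected output for this run: 192.168.49.0/24 192.168.49.1
    docker network inspect addons-757656 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'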
	I1018 08:29:36.929789   10741 kic.go:121] calculated static IP "192.168.49.2" for the "addons-757656" container
	I1018 08:29:36.929855   10741 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 08:29:36.946732   10741 cli_runner.go:164] Run: docker volume create addons-757656 --label name.minikube.sigs.k8s.io=addons-757656 --label created_by.minikube.sigs.k8s.io=true
	I1018 08:29:36.964867   10741 oci.go:103] Successfully created a docker volume addons-757656
	I1018 08:29:36.964944   10741 cli_runner.go:164] Run: docker run --rm --name addons-757656-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-757656 --entrypoint /usr/bin/test -v addons-757656:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 08:29:43.616506   10741 cli_runner.go:217] Completed: docker run --rm --name addons-757656-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-757656 --entrypoint /usr/bin/test -v addons-757656:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (6.651520905s)
	I1018 08:29:43.616534   10741 oci.go:107] Successfully prepared a docker volume addons-757656
	I1018 08:29:43.616561   10741 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:29:43.616584   10741 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 08:29:43.616647   10741 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-757656:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 08:29:48.154088   10741 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-757656:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.537376023s)
	I1018 08:29:48.154117   10741 kic.go:203] duration metric: took 4.537533415s to extract preloaded images to volume ...
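A quick spot-check that the preload actually landed in the volume, assuming (as the tar command above suggests) the tarball unpacks a lib/ tree at the volume root; the kicbase image is reused here only as a throwaway container:

    docker run --rm --entrypoint /bin/ls -v addons-757656:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 \
      /var/lib/containers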
	W1018 08:29:48.154192   10741 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 08:29:48.154219   10741 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 08:29:48.154250   10741 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 08:29:48.213612   10741 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-757656 --name addons-757656 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-757656 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-757656 --network addons-757656 --ip 192.168.49.2 --volume addons-757656:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 08:29:48.523840   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Running}}
	I1018 08:29:48.544024   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:29:48.563155   10741 cli_runner.go:164] Run: docker exec addons-757656 stat /var/lib/dpkg/alternatives/iptables
	I1018 08:29:48.611164   10741 oci.go:144] the created container "addons-757656" has a running status.
	I1018 08:29:48.611194   10741 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa...
	I1018 08:29:48.856905   10741 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 08:29:48.891419   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:29:48.912598   10741 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 08:29:48.912618   10741 kic_runner.go:114] Args: [docker exec --privileged addons-757656 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 08:29:48.965154   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:29:48.985009   10741 machine.go:93] provisionDockerMachine start ...
	I1018 08:29:48.985083   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:49.005164   10741 main.go:141] libmachine: Using SSH client type: native
	I1018 08:29:49.005493   10741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 08:29:49.005510   10741 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 08:29:49.138104   10741 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-757656
	
	I1018 08:29:49.138130   10741 ubuntu.go:182] provisioning hostname "addons-757656"
	I1018 08:29:49.138193   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:49.157056   10741 main.go:141] libmachine: Using SSH client type: native
	I1018 08:29:49.157269   10741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 08:29:49.157286   10741 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-757656 && echo "addons-757656" | sudo tee /etc/hostname
	I1018 08:29:49.298773   10741 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-757656
	
	I1018 08:29:49.298846   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:49.316764   10741 main.go:141] libmachine: Using SSH client type: native
	I1018 08:29:49.316985   10741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 08:29:49.317005   10741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-757656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-757656/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-757656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 08:29:49.448892   10741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 08:29:49.448921   10741 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 08:29:49.448957   10741 ubuntu.go:190] setting up certificates
	I1018 08:29:49.448974   10741 provision.go:84] configureAuth start
	I1018 08:29:49.449022   10741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-757656
	I1018 08:29:49.466987   10741 provision.go:143] copyHostCerts
	I1018 08:29:49.467071   10741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 08:29:49.467208   10741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 08:29:49.467293   10741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 08:29:49.467383   10741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.addons-757656 san=[127.0.0.1 192.168.49.2 addons-757656 localhost minikube]
	I1018 08:29:49.557013   10741 provision.go:177] copyRemoteCerts
	I1018 08:29:49.557068   10741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 08:29:49.557102   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:49.575013   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:29:49.670675   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 08:29:49.689905   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 08:29:49.707122   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 08:29:49.723960   10741 provision.go:87] duration metric: took 274.97423ms to configureAuth
	I1018 08:29:49.723989   10741 ubuntu.go:206] setting minikube options for container-runtime
	I1018 08:29:49.724161   10741 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:29:49.724269   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:49.741964   10741 main.go:141] libmachine: Using SSH client type: native
	I1018 08:29:49.742239   10741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 08:29:49.742265   10741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 08:29:49.982807   10741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 08:29:49.982829   10741 machine.go:96] duration metric: took 997.800508ms to provisionDockerMachine
	I1018 08:29:49.982839   10741 client.go:171] duration metric: took 13.506192733s to LocalClient.Create
	I1018 08:29:49.982853   10741 start.go:167] duration metric: took 13.506246522s to libmachine.API.Create "addons-757656"
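The CRIO_MINIKUBE_OPTIONS drop-in written during provisioning above can be inspected from the host afterwards; a sketch via the profile's SSH tunnel:

    minikube -p addons-757656 ssh -- cat /etc/sysconfig/crio.minikube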
	I1018 08:29:49.982860   10741 start.go:293] postStartSetup for "addons-757656" (driver="docker")
	I1018 08:29:49.982873   10741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 08:29:49.982927   10741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 08:29:49.982973   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:50.000659   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:29:50.100064   10741 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 08:29:50.103735   10741 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 08:29:50.103761   10741 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 08:29:50.103774   10741 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 08:29:50.103843   10741 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 08:29:50.103870   10741 start.go:296] duration metric: took 121.003691ms for postStartSetup
	I1018 08:29:50.104291   10741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-757656
	I1018 08:29:50.121985   10741 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/config.json ...
	I1018 08:29:50.122240   10741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:29:50.122279   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:50.139796   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:29:50.232289   10741 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 08:29:50.236939   10741 start.go:128] duration metric: took 13.762347082s to createHost
	I1018 08:29:50.236957   10741 start.go:83] releasing machines lock for "addons-757656", held for 13.762463086s
	I1018 08:29:50.237008   10741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-757656
	I1018 08:29:50.254404   10741 ssh_runner.go:195] Run: cat /version.json
	I1018 08:29:50.254446   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:50.254490   10741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 08:29:50.254545   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:29:50.273912   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:29:50.273925   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:29:50.365224   10741 ssh_runner.go:195] Run: systemctl --version
	I1018 08:29:50.421536   10741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 08:29:50.455321   10741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 08:29:50.459890   10741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 08:29:50.459961   10741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 08:29:50.484894   10741 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 08:29:50.484914   10741 start.go:495] detecting cgroup driver to use...
	I1018 08:29:50.484942   10741 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 08:29:50.484979   10741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 08:29:50.500755   10741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 08:29:50.513157   10741 docker.go:218] disabling cri-docker service (if available) ...
	I1018 08:29:50.513207   10741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 08:29:50.529683   10741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 08:29:50.547058   10741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 08:29:50.629978   10741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 08:29:50.716529   10741 docker.go:234] disabling docker service ...
	I1018 08:29:50.716593   10741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 08:29:50.734187   10741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 08:29:50.746944   10741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 08:29:50.828933   10741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 08:29:50.908510   10741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 08:29:50.920572   10741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 08:29:50.934308   10741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 08:29:50.934391   10741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:50.944610   10741 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 08:29:50.944666   10741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:50.954214   10741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:50.962850   10741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:50.971381   10741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 08:29:50.979559   10741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:50.987933   10741 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:51.000899   10741 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
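Taken together, the sed edits above should leave 02-crio.conf with the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl set; a sketch of a spot-check inside the node:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf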
	I1018 08:29:51.009761   10741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 08:29:51.017120   10741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 08:29:51.017165   10741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 08:29:51.029195   10741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
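The fallback sequence above (probe the sysctl, load br_netfilter if it is absent, then enable IPv4 forwarding) is reproducible standalone; a sketch:

    sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1 || sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward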
	I1018 08:29:51.036832   10741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:29:51.113393   10741 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 08:29:51.209520   10741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 08:29:51.209584   10741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 08:29:51.213495   10741 start.go:563] Will wait 60s for crictl version
	I1018 08:29:51.213557   10741 ssh_runner.go:195] Run: which crictl
	I1018 08:29:51.217048   10741 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 08:29:51.241017   10741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
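The same version probe can name the socket being waited on explicitly (path from the "Will wait 60s for socket" line above); a sketch:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version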
	I1018 08:29:51.241153   10741 ssh_runner.go:195] Run: crio --version
	I1018 08:29:51.267919   10741 ssh_runner.go:195] Run: crio --version
	I1018 08:29:51.296632   10741 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 08:29:51.297909   10741 cli_runner.go:164] Run: docker network inspect addons-757656 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 08:29:51.315942   10741 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 08:29:51.319960   10741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 08:29:51.330156   10741 kubeadm.go:883] updating cluster {Name:addons-757656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 08:29:51.330289   10741 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:29:51.330396   10741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:29:51.360372   10741 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:29:51.360391   10741 crio.go:433] Images already preloaded, skipping extraction
	I1018 08:29:51.360433   10741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:29:51.386960   10741 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:29:51.386985   10741 cache_images.go:85] Images are preloaded, skipping loading
	I1018 08:29:51.386993   10741 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 08:29:51.387089   10741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-757656 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-757656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 08:29:51.387165   10741 ssh_runner.go:195] Run: crio config
	I1018 08:29:51.431508   10741 cni.go:84] Creating CNI manager for ""
	I1018 08:29:51.431532   10741 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:29:51.431548   10741 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 08:29:51.431567   10741 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-757656 NodeName:addons-757656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 08:29:51.431678   10741 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-757656"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 08:29:51.431733   10741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 08:29:51.439689   10741 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 08:29:51.439761   10741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 08:29:51.447468   10741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 08:29:51.460072   10741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 08:29:51.475088   10741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
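Before kubeadm consumes it, the generated config can be sanity-checked; a sketch, assuming a kubeadm new enough to ship the "config validate" subcommand (v1.34 is):

    /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new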
	I1018 08:29:51.487925   10741 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 08:29:51.491525   10741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 08:29:51.501457   10741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:29:51.579367   10741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:29:51.605813   10741 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656 for IP: 192.168.49.2
	I1018 08:29:51.605834   10741 certs.go:195] generating shared ca certs ...
	I1018 08:29:51.605853   10741 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:51.605989   10741 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 08:29:51.827085   10741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt ...
	I1018 08:29:51.827115   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt: {Name:mk28a5ba0a34efca8afa23abdcf9ad584c7103de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:51.827294   10741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key ...
	I1018 08:29:51.827305   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key: {Name:mk2fe1cb6618b0f657685c882ef4773999853869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:51.827405   10741 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 08:29:51.867574   10741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt ...
	I1018 08:29:51.867604   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt: {Name:mk4e910ca84ebcb66150ba18f5dfb85c9254b593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:51.867769   10741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key ...
	I1018 08:29:51.867781   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key: {Name:mk0293506165d04a11676a23b18a8df4817f4410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:51.867851   10741 certs.go:257] generating profile certs ...
	I1018 08:29:51.867902   10741 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.key
	I1018 08:29:51.867916   10741 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt with IP's: []
	I1018 08:29:52.096824   10741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt ...
	I1018 08:29:52.096854   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: {Name:mk9c4099f9162ba4e2a1492118f57f87701bc8c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:52.097040   10741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.key ...
	I1018 08:29:52.097053   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.key: {Name:mke4a978a87f2b4173a66dd618c8e26416d6b3a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:52.097125   10741 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.key.7277db9b
	I1018 08:29:52.097153   10741 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.crt.7277db9b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 08:29:52.269189   10741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.crt.7277db9b ...
	I1018 08:29:52.269229   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.crt.7277db9b: {Name:mkaa4ce119869d0402d8da221a60e8e2659b444a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:52.269395   10741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.key.7277db9b ...
	I1018 08:29:52.269408   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.key.7277db9b: {Name:mkd572150577d72d661f13406eee9a2c31731770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:52.269489   10741 certs.go:382] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.crt.7277db9b -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.crt
	I1018 08:29:52.269566   10741 certs.go:386] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.key.7277db9b -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.key
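The SAN list baked into the assembled apiserver cert (IPs 10.96.0.1, 127.0.0.1, 10.0.0.1, and 192.168.49.2, per the generation step above) can be read back with openssl; a sketch against the profile copy:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.crt \
      | grep -A1 'Subject Alternative Name'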
	I1018 08:29:52.269618   10741 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.key
	I1018 08:29:52.269636   10741 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.crt with IP's: []
	I1018 08:29:52.384935   10741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.crt ...
	I1018 08:29:52.384964   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.crt: {Name:mkd7d0cff0b7dd9f28e2e44206989f9df30cab10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:52.385131   10741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.key ...
	I1018 08:29:52.385143   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.key: {Name:mk9045e5a26bc5acfac279609bef21ae5373c7c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:52.385324   10741 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 08:29:52.385368   10741 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 08:29:52.385393   10741 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 08:29:52.385414   10741 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 08:29:52.385922   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 08:29:52.403990   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 08:29:52.421369   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 08:29:52.438488   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 08:29:52.455171   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 08:29:52.472259   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 08:29:52.489309   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 08:29:52.506488   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 08:29:52.523670   10741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 08:29:52.542477   10741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 08:29:52.554993   10741 ssh_runner.go:195] Run: openssl version
	I1018 08:29:52.561085   10741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 08:29:52.572057   10741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:29:52.575837   10741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:29:52.575900   10741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:29:52.609611   10741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
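The b5213941.0 name above is simply the openssl subject hash of the CA cert, so the two commands collapse to:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"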
	I1018 08:29:52.618548   10741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 08:29:52.622064   10741 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 08:29:52.622116   10741 kubeadm.go:400] StartCluster: {Name:addons-757656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:29:52.622196   10741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:29:52.622272   10741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:29:52.647426   10741 cri.go:89] found id: ""
	I1018 08:29:52.647500   10741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 08:29:52.655400   10741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 08:29:52.663328   10741 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 08:29:52.663444   10741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 08:29:52.670965   10741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 08:29:52.670985   10741 kubeadm.go:157] found existing configuration files:
	
	I1018 08:29:52.671030   10741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 08:29:52.678405   10741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 08:29:52.678455   10741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 08:29:52.685933   10741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 08:29:52.693498   10741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 08:29:52.693544   10741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 08:29:52.700831   10741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 08:29:52.708265   10741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 08:29:52.708331   10741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 08:29:52.715620   10741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 08:29:52.723144   10741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 08:29:52.723214   10741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
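
The four grep/rm pairs above are one stale-config sweep: any kubeconfig under /etc/kubernetes that does not point at the expected control-plane endpoint is removed so kubeadm can rewrite it. A sketch of the loop (the per-file commands are verbatim from the log; the loop wrapper is illustrative):

    for f in admin kubelet controller-manager scheduler; do
        conf=/etc/kubernetes/${f}.conf
        # grep exits 2 when the file is missing, 1 when the endpoint is absent
        if ! sudo grep -q "https://control-plane.minikube.internal:8443" "$conf"; then
            sudo rm -f "$conf"
        fi
    done
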
	I1018 08:29:52.730606   10741 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 08:29:52.766540   10741 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 08:29:52.766620   10741 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 08:29:52.787577   10741 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 08:29:52.787652   10741 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 08:29:52.787693   10741 kubeadm.go:318] OS: Linux
	I1018 08:29:52.787734   10741 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 08:29:52.787802   10741 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 08:29:52.787872   10741 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 08:29:52.787943   10741 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 08:29:52.788014   10741 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 08:29:52.788088   10741 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 08:29:52.788161   10741 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 08:29:52.788232   10741 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 08:29:52.842883   10741 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 08:29:52.843033   10741 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 08:29:52.843185   10741 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 08:29:52.850569   10741 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 08:29:52.852509   10741 out.go:252]   - Generating certificates and keys ...
	I1018 08:29:52.852638   10741 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 08:29:52.852737   10741 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 08:29:53.320852   10741 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 08:29:53.747842   10741 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 08:29:54.250527   10741 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 08:29:54.549979   10741 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 08:29:54.708478   10741 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 08:29:54.708607   10741 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-757656 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 08:29:54.753599   10741 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 08:29:54.753772   10741 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-757656 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 08:29:55.181818   10741 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 08:29:55.494491   10741 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 08:29:55.717872   10741 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 08:29:55.717937   10741 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 08:29:55.975586   10741 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 08:29:56.311297   10741 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 08:29:56.379102   10741 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 08:29:56.696014   10741 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 08:29:57.154904   10741 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 08:29:57.155360   10741 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 08:29:57.159451   10741 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 08:29:57.160840   10741 out.go:252]   - Booting up control plane ...
	I1018 08:29:57.160957   10741 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 08:29:57.161064   10741 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 08:29:57.161525   10741 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 08:29:57.174906   10741 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 08:29:57.175041   10741 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 08:29:57.181808   10741 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 08:29:57.182026   10741 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 08:29:57.182112   10741 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 08:29:57.280521   10741 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 08:29:57.280748   10741 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 08:29:58.281073   10741 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000734122s
	I1018 08:29:58.283810   10741 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 08:29:58.283945   10741 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 08:29:58.284075   10741 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 08:29:58.284193   10741 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 08:29:59.821569   10741 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.537650296s
	I1018 08:30:00.186887   10741 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.90300259s
	I1018 08:30:01.785832   10741 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501975091s
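
The health gates above map to fixed local endpoints, so a stalled boot can be probed by hand (endpoints taken verbatim from the log; -k because the serving certificates are cluster-internal):

    curl -sk http://127.0.0.1:10248/healthz     # kubelet
    curl -sk https://127.0.0.1:10257/healthz    # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez      # kube-scheduler
    curl -sk https://192.168.49.2:8443/livez    # kube-apiserver
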
	I1018 08:30:01.798590   10741 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 08:30:01.810692   10741 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 08:30:01.819762   10741 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 08:30:01.820029   10741 kubeadm.go:318] [mark-control-plane] Marking the node addons-757656 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 08:30:01.828425   10741 kubeadm.go:318] [bootstrap-token] Using token: j5k97x.1ffhdgaf3x41p7vg
	I1018 08:30:01.830052   10741 out.go:252]   - Configuring RBAC rules ...
	I1018 08:30:01.830179   10741 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 08:30:01.833640   10741 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 08:30:01.839023   10741 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 08:30:01.841477   10741 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 08:30:01.843880   10741 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 08:30:01.847485   10741 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 08:30:02.193073   10741 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 08:30:02.609987   10741 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 08:30:03.193295   10741 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 08:30:03.194097   10741 kubeadm.go:318] 
	I1018 08:30:03.194179   10741 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 08:30:03.194197   10741 kubeadm.go:318] 
	I1018 08:30:03.194320   10741 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 08:30:03.194334   10741 kubeadm.go:318] 
	I1018 08:30:03.194384   10741 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 08:30:03.194478   10741 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 08:30:03.194553   10741 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 08:30:03.194563   10741 kubeadm.go:318] 
	I1018 08:30:03.194639   10741 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 08:30:03.194651   10741 kubeadm.go:318] 
	I1018 08:30:03.194710   10741 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 08:30:03.194721   10741 kubeadm.go:318] 
	I1018 08:30:03.194770   10741 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 08:30:03.194834   10741 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 08:30:03.194896   10741 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 08:30:03.194918   10741 kubeadm.go:318] 
	I1018 08:30:03.194991   10741 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 08:30:03.195063   10741 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 08:30:03.195074   10741 kubeadm.go:318] 
	I1018 08:30:03.195144   10741 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token j5k97x.1ffhdgaf3x41p7vg \
	I1018 08:30:03.195234   10741 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f \
	I1018 08:30:03.195260   10741 kubeadm.go:318] 	--control-plane 
	I1018 08:30:03.195267   10741 kubeadm.go:318] 
	I1018 08:30:03.195338   10741 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 08:30:03.195366   10741 kubeadm.go:318] 
	I1018 08:30:03.195450   10741 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token j5k97x.1ffhdgaf3x41p7vg \
	I1018 08:30:03.195610   10741 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f 
	I1018 08:30:03.197879   10741 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 08:30:03.198042   10741 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 08:30:03.198082   10741 cni.go:84] Creating CNI manager for ""
	I1018 08:30:03.198097   10741 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:30:03.200045   10741 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 08:30:03.201256   10741 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 08:30:03.205587   10741 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 08:30:03.205603   10741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 08:30:03.218621   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 08:30:03.418569   10741 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 08:30:03.418713   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:03.418749   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-757656 minikube.k8s.io/updated_at=2025_10_18T08_30_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=addons-757656 minikube.k8s.io/primary=true
	I1018 08:30:03.427864   10741 ops.go:34] apiserver oom_adj: -16
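
The oom_adj probe above confirms the apiserver runs at -16, one step above the -17 minimum, so the kernel OOM killer will pick almost any other process first. Both the legacy and modern interfaces can be read (sketch):

    cat /proc/$(pgrep kube-apiserver)/oom_adj          # legacy interface: -16
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj    # modern equivalent of the same knob
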
	I1018 08:30:03.497839   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:03.998546   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:04.498441   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:04.998074   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:05.498611   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:05.998008   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:06.498214   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:06.998558   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:07.498434   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:07.998557   10741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:08.061772   10741 kubeadm.go:1113] duration metric: took 4.643120777s to wait for elevateKubeSystemPrivileges
	I1018 08:30:08.061816   10741 kubeadm.go:402] duration metric: took 15.439702063s to StartCluster
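
The burst of `kubectl get sa default` runs above is a ~500ms poll: bring-up is considered settled once the "default" ServiceAccount exists, which is what the 4.6s elevateKubeSystemPrivileges metric measures. A sketch of the equivalent wait loop:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done
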
	I1018 08:30:08.061839   10741 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:08.061968   10741 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 08:30:08.062361   10741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:08.062569   10741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 08:30:08.062579   10741 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:30:08.062639   10741 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 08:30:08.062782   10741 addons.go:69] Setting yakd=true in profile "addons-757656"
	I1018 08:30:08.062802   10741 addons.go:69] Setting registry-creds=true in profile "addons-757656"
	I1018 08:30:08.062816   10741 addons.go:69] Setting storage-provisioner=true in profile "addons-757656"
	I1018 08:30:08.062819   10741 addons.go:69] Setting gcp-auth=true in profile "addons-757656"
	I1018 08:30:08.062828   10741 addons.go:238] Setting addon registry-creds=true in "addons-757656"
	I1018 08:30:08.062843   10741 addons.go:238] Setting addon storage-provisioner=true in "addons-757656"
	I1018 08:30:08.062859   10741 mustload.go:65] Loading cluster: addons-757656
	I1018 08:30:08.062871   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.062861   10741 addons.go:69] Setting default-storageclass=true in profile "addons-757656"
	I1018 08:30:08.062901   10741 addons.go:69] Setting registry=true in profile "addons-757656"
	I1018 08:30:08.062913   10741 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-757656"
	I1018 08:30:08.062899   10741 addons.go:69] Setting ingress=true in profile "addons-757656"
	I1018 08:30:08.062913   10741 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-757656"
	I1018 08:30:08.062949   10741 addons.go:238] Setting addon registry=true in "addons-757656"
	I1018 08:30:08.062952   10741 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-757656"
	I1018 08:30:08.062968   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.062972   10741 addons.go:238] Setting addon ingress=true in "addons-757656"
	I1018 08:30:08.063004   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.063004   10741 addons.go:69] Setting volcano=true in profile "addons-757656"
	I1018 08:30:08.063019   10741 addons.go:238] Setting addon volcano=true in "addons-757656"
	I1018 08:30:08.063034   10741 addons.go:69] Setting ingress-dns=true in profile "addons-757656"
	I1018 08:30:08.063036   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.063042   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.063047   10741 addons.go:238] Setting addon ingress-dns=true in "addons-757656"
	I1018 08:30:08.063065   10741 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:30:08.063083   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.063308   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.063377   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.063471   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.063486   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.063516   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.063525   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.063567   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.063598   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.062807   10741 addons.go:238] Setting addon yakd=true in "addons-757656"
	I1018 08:30:08.063931   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.064191   10741 addons.go:69] Setting volumesnapshots=true in profile "addons-757656"
	I1018 08:30:08.064224   10741 addons.go:238] Setting addon volumesnapshots=true in "addons-757656"
	I1018 08:30:08.064251   10741 addons.go:69] Setting cloud-spanner=true in profile "addons-757656"
	I1018 08:30:08.064263   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.064271   10741 addons.go:238] Setting addon cloud-spanner=true in "addons-757656"
	I1018 08:30:08.064320   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.064454   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.064946   10741 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-757656"
	I1018 08:30:08.064972   10741 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-757656"
	I1018 08:30:08.064974   10741 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-757656"
	I1018 08:30:08.064990   10741 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-757656"
	I1018 08:30:08.065010   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.065338   10741 addons.go:69] Setting metrics-server=true in profile "addons-757656"
	I1018 08:30:08.065374   10741 addons.go:238] Setting addon metrics-server=true in "addons-757656"
	I1018 08:30:08.065385   10741 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-757656"
	I1018 08:30:08.065400   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.065448   10741 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-757656"
	I1018 08:30:08.065481   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.062786   10741 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:30:08.062891   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.066026   10741 out.go:179] * Verifying Kubernetes components...
	I1018 08:30:08.062802   10741 addons.go:69] Setting inspektor-gadget=true in profile "addons-757656"
	I1018 08:30:08.066305   10741 addons.go:238] Setting addon inspektor-gadget=true in "addons-757656"
	I1018 08:30:08.066366   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.067307   10741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:30:08.076030   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.076052   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.076060   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.076897   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.077252   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.076032   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.079961   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.080371   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.115504   10741 addons.go:238] Setting addon default-storageclass=true in "addons-757656"
	I1018 08:30:08.115556   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.116025   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.119950   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.132184   10741 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 08:30:08.133886   10741 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:30:08.133914   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 08:30:08.134015   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.143614   10741 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 08:30:08.145710   10741 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 08:30:08.143614   10741 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 08:30:08.149055   10741 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:30:08.149072   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 08:30:08.149134   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.149440   10741 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 08:30:08.149458   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 08:30:08.149514   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.153746   10741 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 08:30:08.158087   10741 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 08:30:08.158113   10741 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 08:30:08.158186   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.159026   10741 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 08:30:08.159040   10741 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 08:30:08.159110   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.168681   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 08:30:08.175939   10741 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	W1018 08:30:08.177320   10741 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 08:30:08.177876   10741 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 08:30:08.178060   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 08:30:08.178274   10741 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:30:08.178288   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 08:30:08.178359   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.179997   10741 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 08:30:08.180010   10741 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 08:30:08.180068   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.180788   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 08:30:08.181965   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 08:30:08.183357   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 08:30:08.185079   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 08:30:08.186420   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 08:30:08.190595   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 08:30:08.191653   10741 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 08:30:08.191677   10741 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 08:30:08.191747   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.200769   10741 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 08:30:08.201961   10741 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:30:08.201980   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 08:30:08.202034   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.204075   10741 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 08:30:08.205063   10741 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 08:30:08.205080   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 08:30:08.205143   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.208759   10741 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 08:30:08.208765   10741 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 08:30:08.209701   10741 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 08:30:08.209721   10741 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 08:30:08.209775   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.209963   10741 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:30:08.209974   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 08:30:08.210014   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.212104   10741 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:08.214481   10741 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:08.215487   10741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
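
The long pipeline above edits the live coredns ConfigMap in place: sed injects a hosts block ahead of the `forward . /etc/resolv.conf` line and a `log` directive after `errors`, then pipes the result back through kubectl replace. The Corefile fragment it inserts (192.168.49.1 is the docker network gateway):

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
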
	I1018 08:30:08.216535   10741 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 08:30:08.220194   10741 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:30:08.220732   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 08:30:08.220945   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.222113   10741 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-757656"
	I1018 08:30:08.222633   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:08.223105   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:08.236661   10741 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 08:30:08.238385   10741 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 08:30:08.238433   10741 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 08:30:08.238536   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.243897   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.245020   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.264066   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.265003   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.271060   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.282487   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.284387   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.285787   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.292444   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.295147   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.296149   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.301953   10741 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 08:30:08.303214   10741 out.go:179]   - Using image docker.io/busybox:stable
	I1018 08:30:08.304289   10741 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:30:08.304327   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 08:30:08.304409   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:08.306229   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	W1018 08:30:08.307500   10741 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 08:30:08.307543   10741 retry.go:31] will retry after 353.832763ms: ssh: handshake failed: EOF
	I1018 08:30:08.313957   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.316782   10741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:30:08.323432   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:08.337600   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
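
Every "new ssh client" line above dials the same mapped port: the docker inspect template queries which host port the container's 22/tcp was published on. A sketch of doing it manually (port and key path taken from the log):

    PORT=$(docker container inspect -f \
        '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-757656)  # 32768 here
    ssh -p "$PORT" \
        -i /home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa \
        docker@127.0.0.1
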
	I1018 08:30:08.434002   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:30:08.434383   10741 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 08:30:08.434405   10741 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 08:30:08.436957   10741 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:08.436982   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 08:30:08.440330   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:30:08.456506   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:08.456664   10741 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:30:08.456680   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 08:30:08.460063   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:30:08.462786   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:30:08.487136   10741 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 08:30:08.487214   10741 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 08:30:08.487661   10741 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 08:30:08.487680   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 08:30:08.489040   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 08:30:08.490624   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 08:30:08.498029   10741 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 08:30:08.498054   10741 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 08:30:08.498080   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:30:08.503599   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:30:08.518671   10741 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 08:30:08.518696   10741 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 08:30:08.520662   10741 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 08:30:08.520681   10741 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 08:30:08.520933   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:30:08.529853   10741 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 08:30:08.529879   10741 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 08:30:08.540482   10741 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 08:30:08.540521   10741 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 08:30:08.563145   10741 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 08:30:08.563178   10741 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 08:30:08.566194   10741 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 08:30:08.566220   10741 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 08:30:08.571381   10741 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 08:30:08.571402   10741 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 08:30:08.604181   10741 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:30:08.604207   10741 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 08:30:08.613952   10741 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 08:30:08.613979   10741 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 08:30:08.622399   10741 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 08:30:08.622444   10741 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 08:30:08.630556   10741 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:30:08.630578   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 08:30:08.673202   10741 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:08.673292   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 08:30:08.684363   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:30:08.689130   10741 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1018 08:30:08.691582   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:30:08.691703   10741 node_ready.go:35] waiting up to 6m0s for node "addons-757656" to be "Ready" ...
	I1018 08:30:08.707221   10741 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 08:30:08.707254   10741 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 08:30:08.725222   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:08.778410   10741 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 08:30:08.778440   10741 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 08:30:08.810635   10741 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 08:30:08.810676   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 08:30:08.857616   10741 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 08:30:08.857658   10741 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 08:30:08.878004   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:30:08.913880   10741 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 08:30:08.913909   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 08:30:08.966854   10741 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 08:30:08.966884   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 08:30:09.025897   10741 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 08:30:09.025933   10741 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 08:30:09.060786   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 08:30:09.205398   10741 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-757656" context rescaled to 1 replicas
	W1018 08:30:09.380173   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:09.380206   10741 retry.go:31] will retry after 348.936188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
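
Note that ig-crd.yaml was scp'd above at only 14 bytes, consistent with the validator finding neither apiVersion nor kind in it. The same failure should be reproducible without touching the cluster via a client-side dry run (sketch):

    kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
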
	I1018 08:30:09.380277   10741 addons.go:479] Verifying addon registry=true in "addons-757656"
	I1018 08:30:09.382639   10741 out.go:179] * Verifying registry addon...
	I1018 08:30:09.386413   10741 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 08:30:09.394136   10741 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 08:30:09.394164   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:30:09.397613   10741 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
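
The storage-class error above is an optimistic-concurrency conflict: the addon read the local-path object, something else updated it, and the write-back carried a stale resourceVersion. A patch sidesteps the read-modify-write race because it submits no resourceVersion; a sketch of the manual equivalent (the annotation key is the standard default-class marker, though the addon code may differ):

    kubectl patch storageclass local-path -p \
        '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
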
	I1018 08:30:09.700863   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.17989307s)
	I1018 08:30:09.700916   10741 addons.go:479] Verifying addon ingress=true in "addons-757656"
	I1018 08:30:09.701039   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.016639263s)
	I1018 08:30:09.701064   10741 addons.go:479] Verifying addon metrics-server=true in "addons-757656"
	I1018 08:30:09.701143   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.009479928s)
	I1018 08:30:09.702843   10741 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-757656 service yakd-dashboard -n yakd-dashboard
	
	I1018 08:30:09.702859   10741 out.go:179] * Verifying ingress addon...
	I1018 08:30:09.705486   10741 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 08:30:09.708308   10741 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 08:30:09.708324   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:09.730019   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:09.889892   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:10.146472   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.268430636s)
	I1018 08:30:10.146480   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.421211137s)
	W1018 08:30:10.146535   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 08:30:10.146567   10741 retry.go:31] will retry after 270.293079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
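"ensure CRDs are installed first" is the key line: the csi-hostpath-snapclass object cannot be mapped to an API until the volumesnapshotclasses CRD created in the same batch has been established, so applying both in one kubectl invocation can race. A sketch of the usual two-phase fix, shelling out to kubectl (wait --for=condition=established is a standard flag); the file paths are from the log, and this is not the minikube implementation:

package main

import (
    "os"
    "os/exec"
)

func run(args ...string) error {
    cmd := exec.Command("kubectl", args...)
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    return cmd.Run()
}

func main() {
    // Phase 1: the CRD itself.
    if err := run("apply", "-f",
        "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
        panic(err)
    }
    // Block until the API server can actually serve the new kind.
    if err := run("wait", "--for=condition=established", "--timeout=60s",
        "crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
        panic(err)
    }
    // Phase 2: objects of that kind.
    if err := run("apply", "-f",
        "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
        panic(err)
    }
}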
	I1018 08:30:10.146697   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.085874503s)
	I1018 08:30:10.146723   10741 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-757656"
	I1018 08:30:10.148130   10741 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 08:30:10.150570   10741 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 08:30:10.152985   10741 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 08:30:10.153009   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:10.208289   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:10.389378   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:30:10.404827   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:10.404859   10741 retry.go:31] will retry after 420.124511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:10.417954   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:10.654608   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:10.695017   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
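The node_ready.go warnings poll the node object's Ready condition until the kubelet reports True. A sketch of that check with client-go, clientset setup assumed and the node name taken from the log:

package sketch

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

func nodeReady(cs *kubernetes.Clientset) (bool, error) {
    node, err := cs.CoreV1().Nodes().Get(context.TODO(),
        "addons-757656", metav1.GetOptions{})
    if err != nil {
        return false, err
    }
    // The kubelet publishes readiness as a condition on the node status;
    // the log's `has "Ready":"False"` is this condition's Status field.
    for _, c := range node.Status.Conditions {
        if c.Type == corev1.NodeReady {
            return c.Status == corev1.ConditionTrue, nil
        }
    }
    return false, nil
}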
	I1018 08:30:10.708615   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:10.826041   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:10.889464   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:11.153614   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:11.254890   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:11.390187   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:11.653778   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:11.708970   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:11.889774   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:12.154063   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:12.208094   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:12.389412   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:12.653929   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:12.754476   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:12.878483   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.460476362s)
	I1018 08:30:12.878541   10741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.052463839s)
	W1018 08:30:12.878580   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:12.878600   10741 retry.go:31] will retry after 406.363652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:12.889244   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:13.153812   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:13.195269   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:13.254603   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:13.285588   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:13.389244   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:13.654367   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:13.708492   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:30:13.823365   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:13.823395   10741 retry.go:31] will retry after 811.525025ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:13.890324   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:14.154583   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:14.208500   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:14.389892   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:14.635522   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:14.654065   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:14.709215   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:14.889335   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:15.154083   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:15.170761   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:15.170788   10741 retry.go:31] will retry after 1.254858149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:15.208842   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:15.389887   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:15.654194   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:15.694592   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:15.709007   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:15.726209   10741 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 08:30:15.726284   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:15.744076   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
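The cli_runner call above recovers which host port Docker mapped to the container's SSH port 22; the Go template is quoted verbatim in the log, and the resolved client (127.0.0.1:32768) appears on the next line. A sketch doing the same lookup with os/exec:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Same docker template the log shows, single quotes and all.
    out, err := exec.Command("docker", "container", "inspect", "-f",
        `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`,
        "addons-757656").Output()
    if err != nil {
        panic(err)
    }
    // The template's surrounding quotes come back in the output, so strip them.
    port := strings.Trim(strings.TrimSpace(string(out)), "'")
    fmt.Println("ssh to 127.0.0.1:" + port) // the log resolved 32768
}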
	I1018 08:30:15.845872   10741 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 08:30:15.859016   10741 addons.go:238] Setting addon gcp-auth=true in "addons-757656"
	I1018 08:30:15.859077   10741 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:30:15.859466   10741 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:30:15.877988   10741 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 08:30:15.878043   10741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:30:15.890079   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:15.896291   10741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:30:15.990374   10741 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:15.991673   10741 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 08:30:15.992751   10741 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 08:30:15.992783   10741 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 08:30:16.006748   10741 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 08:30:16.006768   10741 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 08:30:16.019865   10741 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:30:16.019887   10741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 08:30:16.032783   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:30:16.153964   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:16.209162   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:16.344643   10741 addons.go:479] Verifying addon gcp-auth=true in "addons-757656"
	I1018 08:30:16.346395   10741 out.go:179] * Verifying gcp-auth addon...
	I1018 08:30:16.350411   10741 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 08:30:16.352629   10741 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 08:30:16.352649   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:16.389027   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:16.426329   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:16.653410   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:16.708527   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:16.854205   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:16.888732   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:30:16.959586   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:16.959615   10741 retry.go:31] will retry after 1.129108218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:17.153489   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:17.208470   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:17.354556   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:17.389737   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:17.653794   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:17.695042   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:17.709514   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:17.853034   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:17.889591   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:18.089864   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:18.154109   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:18.208572   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:18.353092   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:18.388887   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:30:18.620958   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:18.620982   10741 retry.go:31] will retry after 3.766935063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:18.653444   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:18.708698   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:18.853427   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:18.889931   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:19.153966   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:19.209019   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:19.353565   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:19.388984   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:19.654066   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:19.708565   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:19.854002   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:19.889243   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:20.154070   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:20.194673   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:20.208131   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:20.353497   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:20.390103   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:20.653898   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:20.709097   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:20.853592   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:20.889252   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:21.153853   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:21.208791   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:21.353440   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:21.390019   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:21.654327   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:21.708520   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:21.853490   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:21.890043   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:22.153781   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:22.195130   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:22.208802   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:22.353286   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:22.388427   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:22.389781   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:22.654272   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:22.708967   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:22.853630   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:22.889458   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:30:22.929469   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:22.929503   10741 retry.go:31] will retry after 2.49806791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:23.153158   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:23.209038   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:23.353683   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:23.389134   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:23.654551   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:23.708525   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:23.853914   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:23.889160   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:24.154086   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:24.208838   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:24.353319   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:24.389759   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:24.653284   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:24.694621   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:24.708054   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:24.853666   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:24.889183   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:25.153913   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:25.208650   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:25.353731   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:25.389331   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:25.428600   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:25.654081   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:25.709420   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:25.853246   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:25.890032   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:30:25.966944   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:25.966982   10741 retry.go:31] will retry after 7.110811732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
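The retry delays in this log (349ms, 420ms, 812ms, 1.25s, 2.5s, 3.8s, and now 7.1s) grow roughly geometrically with random jitter, i.e. capped exponential backoff. A stdlib sketch of that shape, illustrative rather than minikube's actual retry.go:

package main

import (
    "fmt"
    "math/rand"
    "time"
)

func main() {
    delay := 300 * time.Millisecond
    const maxDelay = 10 * time.Second
    for attempt := 1; attempt <= 8; attempt++ {
        // +/-25% jitter keeps concurrent retriers from synchronizing.
        jitter := time.Duration(rand.Int63n(int64(delay)/2)) - delay/4
        fmt.Printf("attempt %d: will retry after %v\n", attempt, delay+jitter)
        delay *= 2
        if delay > maxDelay {
            delay = maxDelay
        }
    }
}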
	I1018 08:30:26.153730   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:26.208475   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:26.352933   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:26.389458   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:26.653324   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:26.694681   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:26.708438   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:26.853902   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:26.889141   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:27.153980   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:27.209063   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:27.353657   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:27.388938   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:27.654120   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:27.709126   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:27.853600   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:27.891655   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:28.153720   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:28.208592   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:28.352958   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:28.389488   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:28.653872   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:28.695270   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:28.709275   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:28.853982   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:28.889582   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:29.153547   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:29.208547   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:29.354199   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:29.389697   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:29.653527   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:29.708092   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:29.853625   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:29.889008   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:30.153651   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:30.208468   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:30.354025   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:30.389582   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:30.653249   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:30.708400   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:30.853791   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:30.889104   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:31.153888   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:31.194244   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:31.208759   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:31.353632   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:31.389113   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:31.654167   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:31.708515   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:31.854429   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:31.889768   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:32.153460   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:32.208378   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:32.353612   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:32.388995   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:32.653623   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:32.708717   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:32.853013   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:32.889578   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:33.078886   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:33.154130   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:33.194624   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:33.208172   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:33.353802   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:33.389256   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:30:33.619120   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:33.619146   10741 retry.go:31] will retry after 7.500285491s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:30:33.654093   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:33.708064   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:33.853440   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:33.889828   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:34.153618   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:34.208376   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:34.354091   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:34.389984   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:34.653691   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:34.708416   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:34.853998   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:34.889542   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:35.153216   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:35.194789   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:35.208512   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:35.354051   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:35.389538   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:35.653407   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:35.708540   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:35.852890   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:35.889336   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:36.154010   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:36.208706   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:36.353261   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:36.389809   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:36.653656   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:36.708849   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:36.853334   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:36.889830   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:37.153701   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:37.199056   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:37.208380   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:37.354115   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:37.389677   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:37.653236   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:37.708281   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:37.853835   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:37.889267   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:38.153158   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:38.208464   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:38.352953   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:38.389535   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:38.653476   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:38.708857   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:38.853416   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:38.890016   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:39.154005   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:39.208368   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:39.353824   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:39.389252   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:39.654425   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:39.694944   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:39.708663   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:39.853119   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:39.889813   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:40.153834   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:40.209306   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:40.353789   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:40.389304   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:40.654195   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:40.708517   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:40.853242   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:40.889643   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:41.119991   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:41.154081   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:41.208451   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:41.353222   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:41.389944   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:41.654083   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:41.669670   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:41.669706   10741 retry.go:31] will retry after 20.877241923s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
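
The retry.go lines wrap the failing apply in a backoff loop (20.9s here, 23.1s on a later attempt). A minimal sketch of that retry-with-jittered-backoff pattern; the attempt count, base delay, and doubling factor are illustrative, not minikube's actual parameters:

// A sketch of the retry behaviour the retry.go lines suggest.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs apply until it succeeds or attempts are exhausted, sleeping a
// jittered, doubling interval between failures.
func retry(attempts int, base time.Duration, apply func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := apply(); err == nil {
			return nil
		} else {
			// Jitter the delay so concurrent retriers do not synchronize.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("apply failed, will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}
	return errors.New("apply failed after all retries")
}

func main() {
	calls := 0
	_ = retry(3, 5*time.Second, func() error {
		calls++
		if calls < 3 {
			return errors.New("kubectl apply: exit status 1")
		}
		return nil
	})
}
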
	W1018 08:30:41.695168   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:41.708895   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:41.854134   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:41.889997   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:42.154046   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:42.208208   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:42.354002   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:42.389517   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:42.653333   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:42.709117   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:42.853701   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:42.889265   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:43.154254   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:43.208469   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:43.353933   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:43.389407   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:43.654149   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:43.708515   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:43.853981   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:43.889447   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:44.153595   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:44.194936   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:44.208478   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:44.353922   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:44.389530   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:44.653043   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:44.709373   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:44.853899   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:44.889500   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:45.154112   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:45.209110   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:45.353879   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:45.389304   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:45.654083   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:45.709210   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:45.853559   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:45.888864   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:46.153594   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:46.195038   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:46.208820   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:46.354014   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:46.389476   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:46.653223   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:46.708267   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:46.853862   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:46.889374   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:47.154148   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:47.208557   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:47.353285   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:47.389645   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:47.653377   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:47.708611   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:47.853087   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:47.889758   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:48.153779   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:48.195134   10741 node_ready.go:57] node "addons-757656" has "Ready":"False" status (will retry)
	I1018 08:30:48.208949   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:48.353397   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:48.389836   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:48.653642   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:48.709168   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:48.853610   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:48.889101   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:49.154061   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:49.208574   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:49.354035   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:49.389420   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:49.654088   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:49.708970   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:49.852983   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:49.889183   10741 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 08:30:49.889213   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:50.156221   10741 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 08:30:50.156250   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:50.194415   10741 node_ready.go:49] node "addons-757656" is "Ready"
	I1018 08:30:50.194445   10741 node_ready.go:38] duration metric: took 41.502617212s for node "addons-757656" to be "Ready" ...
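
Two checks dominate this log: kapi.go polling pods behind a label selector ("Found 2 Pods for label selector ...") and node_ready.go polling the node's Ready condition, which flipped to true above after 41.5s. A sketch of one round of both checks against a running cluster, assuming client-go and the kubeconfig path shown in the apply commands:

// Sketch only; minikube's kapi.go and node_ready.go poll these in loops.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady: a node counts as Ready when its NodeReady condition is True.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Pods matching an addon's label selector, as in the kapi.go lines.
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
		LabelSelector: "kubernetes.io/minikube-addons=registry",
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Found %d Pods for label selector kubernetes.io/minikube-addons=registry\n", len(pods.Items))

	// The node Ready condition, as in the node_ready.go lines.
	node, err := cs.CoreV1().Nodes().Get(ctx, "addons-757656", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %q is Ready: %v\n", node.Name, nodeIsReady(node))
}
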
	I1018 08:30:50.194462   10741 api_server.go:52] waiting for apiserver process to appear ...
	I1018 08:30:50.194518   10741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:30:50.214645   10741 api_server.go:72] duration metric: took 42.152040104s to wait for apiserver process to appear ...
	I1018 08:30:50.214671   10741 api_server.go:88] waiting for apiserver healthz status ...
	I1018 08:30:50.214693   10741 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 08:30:50.223540   10741 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 08:30:50.224964   10741 api_server.go:141] control plane version: v1.34.1
	I1018 08:30:50.225048   10741 api_server.go:131] duration metric: took 10.368429ms to wait for apiserver health ...
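
With the kube-apiserver process confirmed via pgrep, the next gate is the /healthz endpoint, which must return HTTP 200 with body "ok". A sketch of that probe against the address in the log; skipping TLS verification here is an illustrative shortcut, where minikube instead trusts its own CA:

// Sketch of the healthz probe logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative shortcut; do not skip verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not healthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok".
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
}
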
	I1018 08:30:50.225074   10741 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 08:30:50.256938   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:50.258143   10741 system_pods.go:59] 20 kube-system pods found
	I1018 08:30:50.258186   10741 system_pods.go:61] "amd-gpu-device-plugin-v82lt" [24a1cd58-553e-4bee-beaf-75f0e39eeb29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:30:50.258207   10741 system_pods.go:61] "coredns-66bc5c9577-jc8rc" [71b4dffb-1fac-41b5-a0d1-00bd35170863] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:30:50.258219   10741 system_pods.go:61] "csi-hostpath-attacher-0" [cd1b553c-f507-4a00-be3f-112646d6bac9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:30:50.258229   10741 system_pods.go:61] "csi-hostpath-resizer-0" [33e2db37-d8fa-46ef-9e57-c95189e22be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:30:50.258238   10741 system_pods.go:61] "csi-hostpathplugin-cdc5c" [2f6ff4c2-5cba-44ce-8782-9c83b09037d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:30:50.258258   10741 system_pods.go:61] "etcd-addons-757656" [a652fa9c-0b57-403a-988b-2523ea85d6a1] Running
	I1018 08:30:50.258263   10741 system_pods.go:61] "kindnet-tdxms" [cbe08d6f-c4c2-4fea-a63d-61727b12b409] Running
	I1018 08:30:50.258268   10741 system_pods.go:61] "kube-apiserver-addons-757656" [97d192e7-983b-42e7-819c-f8f37eb0e4c1] Running
	I1018 08:30:50.258273   10741 system_pods.go:61] "kube-controller-manager-addons-757656" [7095119c-78e7-4f0d-9d7d-7163b936265f] Running
	I1018 08:30:50.258281   10741 system_pods.go:61] "kube-ingress-dns-minikube" [03e9ac2e-0063-4caf-a121-5001352c7116] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:30:50.258286   10741 system_pods.go:61] "kube-proxy-gw6hz" [ab15e9e6-a4b9-435f-b1e3-edb00fd5bac3] Running
	I1018 08:30:50.258293   10741 system_pods.go:61] "kube-scheduler-addons-757656" [d0cd2940-7f92-4160-8c18-6e43e28c485a] Running
	I1018 08:30:50.258301   10741 system_pods.go:61] "metrics-server-85b7d694d7-vl9c2" [662d002a-23b4-4f7f-a0bc-3a0813819aa2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:30:50.258310   10741 system_pods.go:61] "nvidia-device-plugin-daemonset-bnzlc" [6bc7ead8-1876-4ced-8f8f-ca3c9e987a0e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:30:50.258318   10741 system_pods.go:61] "registry-6b586f9694-lbbgc" [f8fb4269-2c69-4e40-ac5a-1a96111c3b97] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:30:50.258334   10741 system_pods.go:61] "registry-creds-764b6fb674-h7xh9" [ef91ed70-4df9-4356-8c81-61877956a49a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:30:50.258366   10741 system_pods.go:61] "registry-proxy-7g848" [66b8f065-f87f-49bb-a6ff-d7474f5b093b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:30:50.258375   10741 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7zt7h" [dce8dc99-b0dc-4597-a45a-706a104f0579] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.258384   10741 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nbz64" [3c9a5a42-a8e5-40b0-bea3-3e38e6f5ec23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.258391   10741 system_pods.go:61] "storage-provisioner" [814600d1-fc8c-417d-8d70-98d698fd7d63] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:30:50.258400   10741 system_pods.go:74] duration metric: took 33.308827ms to wait for pod list to return data ...
	I1018 08:30:50.258412   10741 default_sa.go:34] waiting for default service account to be created ...
	I1018 08:30:50.261058   10741 default_sa.go:45] found service account: "default"
	I1018 08:30:50.261078   10741 default_sa.go:55] duration metric: took 2.659425ms for default service account to be created ...
	I1018 08:30:50.261088   10741 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 08:30:50.264095   10741 system_pods.go:86] 20 kube-system pods found
	I1018 08:30:50.264123   10741 system_pods.go:89] "amd-gpu-device-plugin-v82lt" [24a1cd58-553e-4bee-beaf-75f0e39eeb29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:30:50.264130   10741 system_pods.go:89] "coredns-66bc5c9577-jc8rc" [71b4dffb-1fac-41b5-a0d1-00bd35170863] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:30:50.264137   10741 system_pods.go:89] "csi-hostpath-attacher-0" [cd1b553c-f507-4a00-be3f-112646d6bac9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:30:50.264144   10741 system_pods.go:89] "csi-hostpath-resizer-0" [33e2db37-d8fa-46ef-9e57-c95189e22be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:30:50.264150   10741 system_pods.go:89] "csi-hostpathplugin-cdc5c" [2f6ff4c2-5cba-44ce-8782-9c83b09037d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:30:50.264159   10741 system_pods.go:89] "etcd-addons-757656" [a652fa9c-0b57-403a-988b-2523ea85d6a1] Running
	I1018 08:30:50.264163   10741 system_pods.go:89] "kindnet-tdxms" [cbe08d6f-c4c2-4fea-a63d-61727b12b409] Running
	I1018 08:30:50.264166   10741 system_pods.go:89] "kube-apiserver-addons-757656" [97d192e7-983b-42e7-819c-f8f37eb0e4c1] Running
	I1018 08:30:50.264170   10741 system_pods.go:89] "kube-controller-manager-addons-757656" [7095119c-78e7-4f0d-9d7d-7163b936265f] Running
	I1018 08:30:50.264175   10741 system_pods.go:89] "kube-ingress-dns-minikube" [03e9ac2e-0063-4caf-a121-5001352c7116] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:30:50.264183   10741 system_pods.go:89] "kube-proxy-gw6hz" [ab15e9e6-a4b9-435f-b1e3-edb00fd5bac3] Running
	I1018 08:30:50.264188   10741 system_pods.go:89] "kube-scheduler-addons-757656" [d0cd2940-7f92-4160-8c18-6e43e28c485a] Running
	I1018 08:30:50.264194   10741 system_pods.go:89] "metrics-server-85b7d694d7-vl9c2" [662d002a-23b4-4f7f-a0bc-3a0813819aa2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:30:50.264203   10741 system_pods.go:89] "nvidia-device-plugin-daemonset-bnzlc" [6bc7ead8-1876-4ced-8f8f-ca3c9e987a0e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:30:50.264209   10741 system_pods.go:89] "registry-6b586f9694-lbbgc" [f8fb4269-2c69-4e40-ac5a-1a96111c3b97] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:30:50.264216   10741 system_pods.go:89] "registry-creds-764b6fb674-h7xh9" [ef91ed70-4df9-4356-8c81-61877956a49a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:30:50.264223   10741 system_pods.go:89] "registry-proxy-7g848" [66b8f065-f87f-49bb-a6ff-d7474f5b093b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:30:50.264228   10741 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7zt7h" [dce8dc99-b0dc-4597-a45a-706a104f0579] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.264233   10741 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbz64" [3c9a5a42-a8e5-40b0-bea3-3e38e6f5ec23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.264239   10741 system_pods.go:89] "storage-provisioner" [814600d1-fc8c-417d-8d70-98d698fd7d63] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:30:50.264252   10741 retry.go:31] will retry after 242.093194ms: missing components: kube-dns
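
The k8s-apps gate retries until every required component has a Running pod; here kube-dns (served by the coredns pod, still Pending above) is the holdout. A self-contained sketch of that per-component check, with the pod data inlined for illustration:

// Sketch of the "missing components" gate; pod data is inline, not fetched.
package main

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// missingKubeDNS reports whether no CoreDNS pod is Running yet. The
// name-prefix match is an illustrative stand-in for a label selector.
func missingKubeDNS(pods []corev1.Pod) bool {
	for _, p := range pods {
		if strings.HasPrefix(p.Name, "coredns-") && p.Status.Phase == corev1.PodRunning {
			return false
		}
	}
	return true
}

func main() {
	pods := []corev1.Pod{
		{
			ObjectMeta: metav1.ObjectMeta{Name: "coredns-66bc5c9577-jc8rc"},
			Status:     corev1.PodStatus{Phase: corev1.PodPending},
		},
	}
	if missingKubeDNS(pods) {
		fmt.Println("will retry: missing components: kube-dns")
	}
}
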
	I1018 08:30:50.353662   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:50.389579   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:50.511058   10741 system_pods.go:86] 20 kube-system pods found
	I1018 08:30:50.511096   10741 system_pods.go:89] "amd-gpu-device-plugin-v82lt" [24a1cd58-553e-4bee-beaf-75f0e39eeb29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:30:50.511108   10741 system_pods.go:89] "coredns-66bc5c9577-jc8rc" [71b4dffb-1fac-41b5-a0d1-00bd35170863] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:30:50.511119   10741 system_pods.go:89] "csi-hostpath-attacher-0" [cd1b553c-f507-4a00-be3f-112646d6bac9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:30:50.511130   10741 system_pods.go:89] "csi-hostpath-resizer-0" [33e2db37-d8fa-46ef-9e57-c95189e22be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:30:50.511158   10741 system_pods.go:89] "csi-hostpathplugin-cdc5c" [2f6ff4c2-5cba-44ce-8782-9c83b09037d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:30:50.511170   10741 system_pods.go:89] "etcd-addons-757656" [a652fa9c-0b57-403a-988b-2523ea85d6a1] Running
	I1018 08:30:50.511182   10741 system_pods.go:89] "kindnet-tdxms" [cbe08d6f-c4c2-4fea-a63d-61727b12b409] Running
	I1018 08:30:50.511191   10741 system_pods.go:89] "kube-apiserver-addons-757656" [97d192e7-983b-42e7-819c-f8f37eb0e4c1] Running
	I1018 08:30:50.511197   10741 system_pods.go:89] "kube-controller-manager-addons-757656" [7095119c-78e7-4f0d-9d7d-7163b936265f] Running
	I1018 08:30:50.511210   10741 system_pods.go:89] "kube-ingress-dns-minikube" [03e9ac2e-0063-4caf-a121-5001352c7116] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:30:50.511217   10741 system_pods.go:89] "kube-proxy-gw6hz" [ab15e9e6-a4b9-435f-b1e3-edb00fd5bac3] Running
	I1018 08:30:50.511223   10741 system_pods.go:89] "kube-scheduler-addons-757656" [d0cd2940-7f92-4160-8c18-6e43e28c485a] Running
	I1018 08:30:50.511234   10741 system_pods.go:89] "metrics-server-85b7d694d7-vl9c2" [662d002a-23b4-4f7f-a0bc-3a0813819aa2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:30:50.511245   10741 system_pods.go:89] "nvidia-device-plugin-daemonset-bnzlc" [6bc7ead8-1876-4ced-8f8f-ca3c9e987a0e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:30:50.511258   10741 system_pods.go:89] "registry-6b586f9694-lbbgc" [f8fb4269-2c69-4e40-ac5a-1a96111c3b97] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:30:50.511269   10741 system_pods.go:89] "registry-creds-764b6fb674-h7xh9" [ef91ed70-4df9-4356-8c81-61877956a49a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:30:50.511280   10741 system_pods.go:89] "registry-proxy-7g848" [66b8f065-f87f-49bb-a6ff-d7474f5b093b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:30:50.511291   10741 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7zt7h" [dce8dc99-b0dc-4597-a45a-706a104f0579] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.511310   10741 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbz64" [3c9a5a42-a8e5-40b0-bea3-3e38e6f5ec23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.511318   10741 system_pods.go:89] "storage-provisioner" [814600d1-fc8c-417d-8d70-98d698fd7d63] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:30:50.511357   10741 retry.go:31] will retry after 350.295896ms: missing components: kube-dns
	I1018 08:30:50.654938   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:50.708994   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:50.853922   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:50.866446   10741 system_pods.go:86] 20 kube-system pods found
	I1018 08:30:50.866482   10741 system_pods.go:89] "amd-gpu-device-plugin-v82lt" [24a1cd58-553e-4bee-beaf-75f0e39eeb29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:30:50.866491   10741 system_pods.go:89] "coredns-66bc5c9577-jc8rc" [71b4dffb-1fac-41b5-a0d1-00bd35170863] Running
	I1018 08:30:50.866504   10741 system_pods.go:89] "csi-hostpath-attacher-0" [cd1b553c-f507-4a00-be3f-112646d6bac9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:30:50.866514   10741 system_pods.go:89] "csi-hostpath-resizer-0" [33e2db37-d8fa-46ef-9e57-c95189e22be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:30:50.866525   10741 system_pods.go:89] "csi-hostpathplugin-cdc5c" [2f6ff4c2-5cba-44ce-8782-9c83b09037d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:30:50.866535   10741 system_pods.go:89] "etcd-addons-757656" [a652fa9c-0b57-403a-988b-2523ea85d6a1] Running
	I1018 08:30:50.866542   10741 system_pods.go:89] "kindnet-tdxms" [cbe08d6f-c4c2-4fea-a63d-61727b12b409] Running
	I1018 08:30:50.866551   10741 system_pods.go:89] "kube-apiserver-addons-757656" [97d192e7-983b-42e7-819c-f8f37eb0e4c1] Running
	I1018 08:30:50.866557   10741 system_pods.go:89] "kube-controller-manager-addons-757656" [7095119c-78e7-4f0d-9d7d-7163b936265f] Running
	I1018 08:30:50.866567   10741 system_pods.go:89] "kube-ingress-dns-minikube" [03e9ac2e-0063-4caf-a121-5001352c7116] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:30:50.866582   10741 system_pods.go:89] "kube-proxy-gw6hz" [ab15e9e6-a4b9-435f-b1e3-edb00fd5bac3] Running
	I1018 08:30:50.866591   10741 system_pods.go:89] "kube-scheduler-addons-757656" [d0cd2940-7f92-4160-8c18-6e43e28c485a] Running
	I1018 08:30:50.866599   10741 system_pods.go:89] "metrics-server-85b7d694d7-vl9c2" [662d002a-23b4-4f7f-a0bc-3a0813819aa2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:30:50.866612   10741 system_pods.go:89] "nvidia-device-plugin-daemonset-bnzlc" [6bc7ead8-1876-4ced-8f8f-ca3c9e987a0e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:30:50.866623   10741 system_pods.go:89] "registry-6b586f9694-lbbgc" [f8fb4269-2c69-4e40-ac5a-1a96111c3b97] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:30:50.866631   10741 system_pods.go:89] "registry-creds-764b6fb674-h7xh9" [ef91ed70-4df9-4356-8c81-61877956a49a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:30:50.866641   10741 system_pods.go:89] "registry-proxy-7g848" [66b8f065-f87f-49bb-a6ff-d7474f5b093b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:30:50.866650   10741 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7zt7h" [dce8dc99-b0dc-4597-a45a-706a104f0579] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.866662   10741 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbz64" [3c9a5a42-a8e5-40b0-bea3-3e38e6f5ec23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:30:50.866667   10741 system_pods.go:89] "storage-provisioner" [814600d1-fc8c-417d-8d70-98d698fd7d63] Running
	I1018 08:30:50.866679   10741 system_pods.go:126] duration metric: took 605.584736ms to wait for k8s-apps to be running ...
	I1018 08:30:50.866692   10741 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 08:30:50.866750   10741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:30:50.884428   10741 system_svc.go:56] duration metric: took 17.72595ms WaitForService to wait for kubelet
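
The kubelet gate shells out to systemd: `systemctl is-active --quiet` signals the unit's state purely through its exit status, with no output to parse. A minimal equivalent of the check run above:

// Sketch of the kubelet service check; exit status 0 means the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; success is signalled only by exit status.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is running")
}
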
	I1018 08:30:50.884468   10741 kubeadm.go:586] duration metric: took 42.821868966s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:30:50.884491   10741 node_conditions.go:102] verifying NodePressure condition ...
	I1018 08:30:50.887813   10741 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 08:30:50.887845   10741 node_conditions.go:123] node cpu capacity is 8
	I1018 08:30:50.887864   10741 node_conditions.go:105] duration metric: took 3.367416ms to run NodePressure ...
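
The NodePressure pass reads capacity straight off the node's status. A sketch of handling those quantities with apimachinery's resource package, using the values logged above:

// Sketch of parsing the capacity quantities reported by node_conditions.go.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	storage := resource.MustParse("304681132Ki") // ephemeral-storage capacity
	cpu := resource.MustParse("8")               // cpu capacity
	// Quantity.Value() yields bytes for storage quantities.
	fmt.Printf("node storage ephemeral capacity is %s (%.1f GiB)\n",
		storage.String(), float64(storage.Value())/(1<<30))
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
}
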
	I1018 08:30:50.887879   10741 start.go:241] waiting for startup goroutines ...
	I1018 08:30:50.889086   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:51.154207   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:51.209144   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:51.353998   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:51.389564   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:51.654059   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:51.708736   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:51.854310   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:51.890183   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:52.154622   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:52.209294   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:52.354716   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:52.389605   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:52.653981   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:52.754619   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:52.853433   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:52.890610   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:53.153818   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:53.208692   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:53.353486   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:53.390208   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:53.654419   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:53.709150   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:53.853766   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:53.889548   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:54.153621   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:54.208698   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:54.353515   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:54.390475   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:54.654778   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:54.708770   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:54.853592   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:54.889062   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:55.153835   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:55.208296   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:55.354005   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:55.389465   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:55.655240   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:55.709293   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:55.854754   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:55.955138   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:56.154120   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:56.208836   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:56.353890   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:56.389605   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:56.656261   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:56.711096   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:56.854454   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:56.891124   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:57.154628   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:57.209555   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:57.354501   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:57.389413   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:57.655262   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:57.709389   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:57.854326   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:57.890281   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:58.154938   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:58.208651   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:58.353861   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:58.389989   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:58.655104   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:58.708955   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:58.853840   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:58.889957   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:59.154485   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:59.209056   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:59.353931   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:59.389879   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:59.655463   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:59.709267   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:59.853904   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:59.889698   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:00.154402   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:00.209457   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:00.354107   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:00.390009   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:00.654242   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:00.709045   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:00.854002   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:00.910891   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:01.154358   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:01.209321   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:01.355600   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:01.390926   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:01.654050   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:01.709136   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:01.854680   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:01.889644   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:02.154162   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:02.254872   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:02.354597   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:02.389531   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:02.547579   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:02.654161   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:02.709471   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:02.854377   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:02.890011   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:03.154625   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:03.209619   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:31:03.267712   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:03.267750   10741 retry.go:31] will retry after 23.096835382s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
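Note: the apply fails kubectl's client-side validation because ig-crd.yaml is missing the apiVersion and kind fields every Kubernetes manifest document must declare; the resources in ig-deployment.yaml apply cleanly (all "unchanged"). A minimal sketch of a document that passes the same check — the ConfigMap here is purely illustrative, not the actual CRD contents:

cat <<'EOF' | kubectl apply --dry-run=client -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: validation-demo
data:
  key: value
EOF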
	I1018 08:31:03.354015   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:03.390686   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:03.653574   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:03.709960   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:03.853701   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:03.889697   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:04.154484   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:04.209326   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:04.354655   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:04.389757   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:04.656727   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:04.765300   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:04.867594   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:04.965942   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:05.153727   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:05.208190   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:05.353620   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:05.389276   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:05.660733   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:05.711073   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:05.854250   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:05.890289   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:06.155022   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:06.208913   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:06.354220   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:06.390138   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:06.654333   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:06.709422   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:06.854133   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:06.889728   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:07.153663   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:07.208195   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:07.354145   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:07.389864   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:07.654919   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:07.709778   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:07.853655   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:07.889148   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:08.155140   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:08.208873   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:08.354458   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:08.390320   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:08.654681   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:08.709910   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:08.853248   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:08.889927   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:09.154009   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:09.209037   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:09.353797   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:09.389660   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:09.654114   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:09.708502   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:09.853936   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:09.889613   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:10.153788   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:10.208098   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:10.354032   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:10.389750   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:10.654192   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:10.709408   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:10.853784   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:10.889219   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:11.155387   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:11.210155   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:11.355188   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:11.391288   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:11.656180   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:11.710892   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:11.854256   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:11.890121   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:12.155187   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:12.209387   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:12.355185   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:12.389848   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:12.654251   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:12.709273   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:12.854011   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:12.890138   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:13.155234   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:13.208841   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:13.353797   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:13.389643   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:13.654293   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:13.709407   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:13.854147   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:13.890156   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:14.154645   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:14.209667   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:14.353414   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:14.390724   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:14.654850   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:14.708766   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:14.853590   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:14.889707   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:15.153919   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:15.208859   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:15.353892   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:15.389952   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:15.654439   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:15.709192   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:15.853717   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:15.889829   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:16.153391   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:16.209242   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:16.354120   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:16.391096   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:16.654591   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:16.754599   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:16.853723   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:16.889770   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:17.154066   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:17.208890   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:17.353793   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:17.389705   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:17.653872   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:17.754425   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:17.854290   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:17.889983   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:18.154510   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:18.208992   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:18.353629   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:18.389244   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:18.654710   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:18.713227   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:18.853872   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:18.889768   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:19.153958   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:19.208704   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:19.353223   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:19.389708   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:19.654523   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:19.708836   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:19.853635   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:19.889120   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:20.154821   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:20.208678   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:20.359605   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:20.389062   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:20.654832   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:20.708481   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:20.854403   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:20.955739   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:21.153885   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:21.209783   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:21.354495   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:21.391834   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:21.655769   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:21.716642   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:21.855646   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:21.890486   10741 kapi.go:107] duration metric: took 1m12.50407048s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 08:31:22.153933   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:22.208827   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:22.353510   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:22.653999   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:22.709127   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:22.853856   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:23.154655   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:23.209370   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:23.353996   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:23.654643   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:23.755790   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:23.855942   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:24.154257   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:24.209068   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:24.353673   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:24.726824   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:24.726852   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:24.853674   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:25.154436   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:25.209464   10741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:25.354053   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:25.654527   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:25.741221   10741 kapi.go:107] duration metric: took 1m16.035730152s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 08:31:25.854010   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:26.154145   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:26.354440   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:26.365534   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:26.655489   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:26.853657   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:27.154183   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:31:27.169202   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:27.169233   10741 retry.go:31] will retry after 19.645870036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:27.354211   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:27.654389   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:27.853861   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:28.154312   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:28.354319   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:28.654515   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:28.853811   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:29.154121   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:29.354064   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:29.654585   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:29.853826   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:30.154886   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:30.354096   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:30.654196   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:30.854583   10741 kapi.go:107] duration metric: took 1m14.504170603s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 08:31:30.856876   10741 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-757656 cluster.
	I1018 08:31:30.858375   10741 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 08:31:30.859619   10741 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 08:31:31.153910   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:31.654469   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:32.154397   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:32.654362   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:33.154007   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:33.655335   10741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:34.153594   10741 kapi.go:107] duration metric: took 1m24.003027194s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
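Each kapi.go poll above watches pods matching a label selector until they leave Pending. The same condition can be checked by hand with kubectl — a sketch using the selector from the log (the csi-hostpath pods live in kube-system, per the container status further below):

kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
kubectl -n kube-system wait pod \
  -l kubernetes.io/minikube-addons=csi-hostpath-driver \
  --for=condition=Ready --timeout=5m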
	I1018 08:31:46.817252   10741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 08:31:47.354361   10741 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 08:31:47.354480   10741 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
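After three attempts, enabling inspektor-gadget is reported as failed with the same validation error. The message's own suggestion, --validate=false, only disables the client-side schema check; a document with no apiVersion or kind at all would most likely still be rejected when kubectl tries to decode it. A sketch of reproducing the failing apply by hand, with the paths and flag taken from the log:

sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
  --validate=false \
  -f /etc/kubernetes/addons/ig-crd.yaml \
  -f /etc/kubernetes/addons/ig-deployment.yaml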
	I1018 08:31:47.356291   10741 out.go:179] * Enabled addons: registry-creds, storage-provisioner, nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, storage-provisioner-rancher, metrics-server, yakd, ingress-dns, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 08:31:47.358050   10741 addons.go:514] duration metric: took 1m39.295411056s for enable addons: enabled=[registry-creds storage-provisioner nvidia-device-plugin amd-gpu-device-plugin cloud-spanner storage-provisioner-rancher metrics-server yakd ingress-dns volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1018 08:31:47.358091   10741 start.go:246] waiting for cluster config update ...
	I1018 08:31:47.358108   10741 start.go:255] writing updated cluster config ...
	I1018 08:31:47.358390   10741 ssh_runner.go:195] Run: rm -f paused
	I1018 08:31:47.362481   10741 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 08:31:47.366191   10741 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jc8rc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.370452   10741 pod_ready.go:94] pod "coredns-66bc5c9577-jc8rc" is "Ready"
	I1018 08:31:47.370480   10741 pod_ready.go:86] duration metric: took 4.266473ms for pod "coredns-66bc5c9577-jc8rc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.372337   10741 pod_ready.go:83] waiting for pod "etcd-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.376577   10741 pod_ready.go:94] pod "etcd-addons-757656" is "Ready"
	I1018 08:31:47.376600   10741 pod_ready.go:86] duration metric: took 4.220256ms for pod "etcd-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.378896   10741 pod_ready.go:83] waiting for pod "kube-apiserver-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.384306   10741 pod_ready.go:94] pod "kube-apiserver-addons-757656" is "Ready"
	I1018 08:31:47.384337   10741 pod_ready.go:86] duration metric: took 5.420435ms for pod "kube-apiserver-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.386423   10741 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.766295   10741 pod_ready.go:94] pod "kube-controller-manager-addons-757656" is "Ready"
	I1018 08:31:47.766326   10741 pod_ready.go:86] duration metric: took 379.881197ms for pod "kube-controller-manager-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:47.966387   10741 pod_ready.go:83] waiting for pod "kube-proxy-gw6hz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:48.366197   10741 pod_ready.go:94] pod "kube-proxy-gw6hz" is "Ready"
	I1018 08:31:48.366224   10741 pod_ready.go:86] duration metric: took 399.813712ms for pod "kube-proxy-gw6hz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:48.566378   10741 pod_ready.go:83] waiting for pod "kube-scheduler-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:48.966382   10741 pod_ready.go:94] pod "kube-scheduler-addons-757656" is "Ready"
	I1018 08:31:48.966410   10741 pod_ready.go:86] duration metric: took 400.005699ms for pod "kube-scheduler-addons-757656" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:31:48.966421   10741 pod_ready.go:40] duration metric: took 1.603909567s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 08:31:49.010561   10741 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 08:31:49.012649   10741 out.go:179] * Done! kubectl is now configured to use "addons-757656" cluster and "default" namespace by default
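With the profile up, minikube has pointed the local kubeconfig at the new cluster. A quick sanity check (profile name taken from the log above):

kubectl config current-context   # expect: addons-757656
kubectl get nodes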
	
	
	==> CRI-O <==
	Oct 18 08:31:33 addons-757656 crio[783]: time="2025-10-18T08:31:33.239908428Z" level=info msg="Starting container: 4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080" id=ed583d91-db67-4c1d-815a-e1a263a91162 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 08:31:33 addons-757656 crio[783]: time="2025-10-18T08:31:33.242306987Z" level=info msg="Started container" PID=6226 containerID=4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080 description=kube-system/csi-hostpathplugin-cdc5c/csi-snapshotter id=ed583d91-db67-4c1d-815a-e1a263a91162 name=/runtime.v1.RuntimeService/StartContainer sandboxID=28f2be34559e579f4d6524799a1421bef848b387dd8ee2c8116f3a2f6e8f70fa
	Oct 18 08:31:49 addons-757656 crio[783]: time="2025-10-18T08:31:49.832254215Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6a8ad9f1-fa64-4935-8a56-503675dff9c1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 08:31:49 addons-757656 crio[783]: time="2025-10-18T08:31:49.832386671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:31:49 addons-757656 crio[783]: time="2025-10-18T08:31:49.838846826Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1aa2cceba9c59a32cb7f8a4f0726cf21fe661a7a7f16896e87abe3b5168fefed UID:a3b7e68b-5d74-4895-bd87-cf2c9aaf93c1 NetNS:/var/run/netns/ffc6f478-61ec-444c-98db-e61fb8212b47 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ab68}] Aliases:map[]}"
	Oct 18 08:31:49 addons-757656 crio[783]: time="2025-10-18T08:31:49.838876422Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 08:31:49 addons-757656 crio[783]: time="2025-10-18T08:31:49.848995298Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1aa2cceba9c59a32cb7f8a4f0726cf21fe661a7a7f16896e87abe3b5168fefed UID:a3b7e68b-5d74-4895-bd87-cf2c9aaf93c1 NetNS:/var/run/netns/ffc6f478-61ec-444c-98db-e61fb8212b47 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ab68}] Aliases:map[]}"
	Oct 18 08:31:49 addons-757656 crio[783]: time="2025-10-18T08:31:49.849129008Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 08:31:49 addons-757656 crio[783]: time="2025-10-18T08:31:49.849978611Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 08:31:49 addons-757656 crio[783]: time="2025-10-18T08:31:49.850811091Z" level=info msg="Ran pod sandbox 1aa2cceba9c59a32cb7f8a4f0726cf21fe661a7a7f16896e87abe3b5168fefed with infra container: default/busybox/POD" id=6a8ad9f1-fa64-4935-8a56-503675dff9c1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 08:31:49 addons-757656 crio[783]: time="2025-10-18T08:31:49.851972693Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=89312b49-ed2a-406c-afe4-08cbac589354 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:31:49 addons-757656 crio[783]: time="2025-10-18T08:31:49.852110568Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=89312b49-ed2a-406c-afe4-08cbac589354 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:31:49 addons-757656 crio[783]: time="2025-10-18T08:31:49.852144253Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=89312b49-ed2a-406c-afe4-08cbac589354 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:31:49 addons-757656 crio[783]: time="2025-10-18T08:31:49.852794914Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=49ec122e-4c37-4e7b-9953-911bdcb4047d name=/runtime.v1.ImageService/PullImage
	Oct 18 08:31:49 addons-757656 crio[783]: time="2025-10-18T08:31:49.854260513Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 08:31:50 addons-757656 crio[783]: time="2025-10-18T08:31:50.464787084Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=49ec122e-4c37-4e7b-9953-911bdcb4047d name=/runtime.v1.ImageService/PullImage
	Oct 18 08:31:50 addons-757656 crio[783]: time="2025-10-18T08:31:50.465432574Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e5609832-20c3-412e-bcec-5a71af8c2065 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:31:50 addons-757656 crio[783]: time="2025-10-18T08:31:50.46699449Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e6b0d0fb-a04f-4997-bf05-10546532bf8a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:31:50 addons-757656 crio[783]: time="2025-10-18T08:31:50.471021532Z" level=info msg="Creating container: default/busybox/busybox" id=85413222-c05d-403f-a580-c51edb397001 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 08:31:50 addons-757656 crio[783]: time="2025-10-18T08:31:50.471687165Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:31:50 addons-757656 crio[783]: time="2025-10-18T08:31:50.477587104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:31:50 addons-757656 crio[783]: time="2025-10-18T08:31:50.478167827Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:31:50 addons-757656 crio[783]: time="2025-10-18T08:31:50.516646965Z" level=info msg="Created container f533f115801c80350b89591e680dff96a8f21cb31e7a10e863c2a8e0d5fbf18e: default/busybox/busybox" id=85413222-c05d-403f-a580-c51edb397001 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 08:31:50 addons-757656 crio[783]: time="2025-10-18T08:31:50.517297476Z" level=info msg="Starting container: f533f115801c80350b89591e680dff96a8f21cb31e7a10e863c2a8e0d5fbf18e" id=38893962-6d41-4d18-b57a-370afe8e7dfa name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 08:31:50 addons-757656 crio[783]: time="2025-10-18T08:31:50.51914945Z" level=info msg="Started container" PID=6447 containerID=f533f115801c80350b89591e680dff96a8f21cb31e7a10e863c2a8e0d5fbf18e description=default/busybox/busybox id=38893962-6d41-4d18-b57a-370afe8e7dfa name=/runtime.v1.RuntimeService/StartContainer sandboxID=1aa2cceba9c59a32cb7f8a4f0726cf21fe661a7a7f16896e87abe3b5168fefed
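The CRI-O entries above trace the standard start path for the busybox pod: run the sandbox, check the image cache, pull by tag, resolve to a digest, create, then start the container. The same state can be queried directly from the runtime — a sketch, run inside the node:

# Inside the node (e.g. `minikube -p addons-757656 ssh`):
sudo crictl images | grep busybox
sudo crictl ps --name busybox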
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	f533f115801c8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   1aa2cceba9c59       busybox                                     default
	4790d2a16058f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          25 seconds ago       Running             csi-snapshotter                          0                   28f2be34559e5       csi-hostpathplugin-cdc5c                    kube-system
	48cc0cbf614ba       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          26 seconds ago       Running             csi-provisioner                          0                   28f2be34559e5       csi-hostpathplugin-cdc5c                    kube-system
	c8379247a51b8       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            27 seconds ago       Running             liveness-probe                           0                   28f2be34559e5       csi-hostpathplugin-cdc5c                    kube-system
	941a165621387       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           27 seconds ago       Running             hostpath                                 0                   28f2be34559e5       csi-hostpathplugin-cdc5c                    kube-system
	0541dfc3f0d13       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 28 seconds ago       Running             gcp-auth                                 0                   8185b4418ee67       gcp-auth-78565c9fb4-z25bv                   gcp-auth
	a4ce05cad528b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            29 seconds ago       Running             gadget                                   0                   2e64eeb086c88       gadget-km4ch                                gadget
	14afe1f1816da       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                32 seconds ago       Running             node-driver-registrar                    0                   28f2be34559e5       csi-hostpathplugin-cdc5c                    kube-system
	8f9e1307c9d0a       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             33 seconds ago       Running             controller                               0                   c9b5d5846a39a       ingress-nginx-controller-675c5ddd98-9bx4w   ingress-nginx
	440454c3da4c0       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              37 seconds ago       Running             registry-proxy                           0                   851baf9bee8f9       registry-proxy-7g848                        kube-system
	21d5b8b41d5fc       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             37 seconds ago       Exited              patch                                    2                   ad4eb3f1b9439       gcp-auth-certs-patch-hwbx5                  gcp-auth
	bbb9cb4aa33f4       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              38 seconds ago       Running             csi-resizer                              0                   66f0e5ae8783f       csi-hostpath-resizer-0                      kube-system
	a1124cfdace32       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   40 seconds ago       Exited              create                                   0                   13bacd5c8e794       gcp-auth-certs-create-5rqrw                 gcp-auth
	adf0e53cd5b4b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   40 seconds ago       Exited              patch                                    0                   3a632b124a208       ingress-nginx-admission-patch-4jmq4         ingress-nginx
	0ad5891c05dff       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     40 seconds ago       Running             nvidia-device-plugin-ctr                 0                   97a30391685b9       nvidia-device-plugin-daemonset-bnzlc        kube-system
	f6e03a69b7bc4       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     48 seconds ago       Running             amd-gpu-device-plugin                    0                   247281eda0cae       amd-gpu-device-plugin-v82lt                 kube-system
	4171876174cfa       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   49 seconds ago       Running             csi-external-health-monitor-controller   0                   28f2be34559e5       csi-hostpathplugin-cdc5c                    kube-system
	813e46f6ecd6f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      50 seconds ago       Running             volume-snapshot-controller               0                   b3c8ad4e98913       snapshot-controller-7d9fbc56b8-7zt7h        kube-system
	4eb2f9d88de24       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               51 seconds ago       Running             cloud-spanner-emulator                   0                   c07351a604a5b       cloud-spanner-emulator-86bd5cbb97-x5chs     default
	7b9675803326b       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              53 seconds ago       Running             yakd                                     0                   c15a4e7496003       yakd-dashboard-5ff678cb9-v82m4              yakd-dashboard
	ca11cfbc41b51       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   56 seconds ago       Exited              create                                   0                   e921d9185fb60       ingress-nginx-admission-create-s2qbg        ingress-nginx
	ab63780aacfa0       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               57 seconds ago       Running             minikube-ingress-dns                     0                   176cb0a01af6b       kube-ingress-dns-minikube                   kube-system
	42811c9c9f88b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   2209bdb6af913       local-path-provisioner-648f6765c9-f8n49     local-path-storage
	72deecf66bdbe       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   2832ed0ad5d31       csi-hostpath-attacher-0                     kube-system
	cd1b8704f38dd       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   b28fa2823e462       snapshot-controller-7d9fbc56b8-nbz64        kube-system
	c08a0e9528c61       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   52283bc5a5a2f       registry-6b586f9694-lbbgc                   kube-system
	c216c132bff88       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   e09a9bd5b3438       metrics-server-85b7d694d7-vl9c2             kube-system
	dbf5c6e8579fb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   668e1c33dd1aa       coredns-66bc5c9577-jc8rc                    kube-system
	7189be801872d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   fe884349ac819       storage-provisioner                         kube-system
	6c971e87dacce       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   d8fa2f4be30c4       kube-proxy-gw6hz                            kube-system
	1511480aef50d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   54168cb719e6a       kindnet-tdxms                               kube-system
	52adc977887b4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   f135fab3e61a1       kube-controller-manager-addons-757656       kube-system
	4faa6d23dba1b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   bddbd7fc6f4aa       kube-apiserver-addons-757656                kube-system
	717994737c9e9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   6c3b44c81dc7d       kube-scheduler-addons-757656                kube-system
	56d69d63fccc1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   9765f88d8612b       etcd-addons-757656                          kube-system
	
	
	==> coredns [dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266] <==
	[INFO] 10.244.0.19:59427 - 23242 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003021363s
	[INFO] 10.244.0.19:46338 - 58835 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000058931s
	[INFO] 10.244.0.19:46338 - 58578 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000086611s
	[INFO] 10.244.0.19:34082 - 10721 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000067179s
	[INFO] 10.244.0.19:34082 - 10475 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000091322s
	[INFO] 10.244.0.19:49931 - 63089 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000062694s
	[INFO] 10.244.0.19:49931 - 62837 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000089737s
	[INFO] 10.244.0.19:60021 - 22195 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000105222s
	[INFO] 10.244.0.19:60021 - 21970 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087243s
	[INFO] 10.244.0.22:52491 - 34641 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000186923s
	[INFO] 10.244.0.22:52945 - 63777 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000284565s
	[INFO] 10.244.0.22:51771 - 44107 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000105243s
	[INFO] 10.244.0.22:54841 - 37030 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000109355s
	[INFO] 10.244.0.22:60902 - 59514 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000083297s
	[INFO] 10.244.0.22:33708 - 51597 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125038s
	[INFO] 10.244.0.22:47627 - 30229 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00414916s
	[INFO] 10.244.0.22:58016 - 46237 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004344775s
	[INFO] 10.244.0.22:50686 - 8596 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005822763s
	[INFO] 10.244.0.22:42546 - 16831 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.009023751s
	[INFO] 10.244.0.22:52707 - 9375 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004641527s
	[INFO] 10.244.0.22:36457 - 16383 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005984458s
	[INFO] 10.244.0.22:55464 - 29157 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004662359s
	[INFO] 10.244.0.22:34601 - 19730 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00521184s
	[INFO] 10.244.0.22:44135 - 11561 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001151193s
	[INFO] 10.244.0.22:60266 - 50084 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002491243s
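	The NXDOMAIN bursts above are ordinary search-path expansion rather than lookup failures: with the kubelet default of options ndots:5, the resolver walks every suffix in the pod's resolv.conf search list before the bare name resolves (the two closing NOERROR lines). A resolv.conf consistent with the suffixes seen in this log would look roughly like the sketch below; the search order is inferred from the queries, not captured from the node, and 10.96.0.10 is the conventional cluster-DNS VIP in a default minikube service CIDR.
	
	    nameserver 10.96.0.10
	    search <namespace>.svc.cluster.local svc.cluster.local cluster.local local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	    options ndots:5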
	
	
	==> describe nodes <==
	Name:               addons-757656
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-757656
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=addons-757656
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T08_30_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-757656
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-757656"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 08:30:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-757656
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 08:31:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 08:31:34 +0000   Sat, 18 Oct 2025 08:29:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 08:31:34 +0000   Sat, 18 Oct 2025 08:29:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 08:31:34 +0000   Sat, 18 Oct 2025 08:29:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 08:31:34 +0000   Sat, 18 Oct 2025 08:30:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-757656
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                cfce5e10-5e2d-40cf-8446-b5fe69082a53
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-86bd5cbb97-x5chs      0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  gadget                      gadget-km4ch                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  gcp-auth                    gcp-auth-78565c9fb4-z25bv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-9bx4w    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         109s
	  kube-system                 amd-gpu-device-plugin-v82lt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 coredns-66bc5c9577-jc8rc                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 csi-hostpathplugin-cdc5c                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 etcd-addons-757656                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-tdxms                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-addons-757656                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-addons-757656        200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-gw6hz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-addons-757656                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 metrics-server-85b7d694d7-vl9c2              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         109s
	  kube-system                 nvidia-device-plugin-daemonset-bnzlc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 registry-6b586f9694-lbbgc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 registry-creds-764b6fb674-h7xh9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 registry-proxy-7g848                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 snapshot-controller-7d9fbc56b8-7zt7h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 snapshot-controller-7d9fbc56b8-nbz64         0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  local-path-storage          local-path-provisioner-648f6765c9-f8n49      0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-v82m4               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     109s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 108s  kube-proxy       
	  Normal  Starting                 116s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s  kubelet          Node addons-757656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s  kubelet          Node addons-757656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s  kubelet          Node addons-757656 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           111s  node-controller  Node addons-757656 event: Registered Node addons-757656 in Controller
	  Normal  NodeReady                69s   kubelet          Node addons-757656 status is now: NodeReady
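	For reference, the percentages in the Allocated resources table are computed against the Allocatable block above and truncated to whole numbers: 1050m CPU requested / 8000m allocatable = 13.125%, shown as 13%. The same view can be regenerated against a live cluster with:
	
	    kubectl --context addons-757656 describe node addons-757656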
	
	
	==> dmesg <==
	[Oct18 08:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001774] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.091012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.407566] i8042: Warning: Keylock active
	[  +0.012699] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004826] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001000] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.001139] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001062] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.001138] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001047] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001176] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001107] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001097] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511423] block sda: the capability attribute has been deprecated.
	[  +0.101295] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028366] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.196963] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14] <==
	{"level":"warn","ts":"2025-10-18T08:29:59.591564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.597548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.603984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.610520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.619588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.626700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.634245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.641845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.648926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.655856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.662442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.669912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.677101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.691439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.703560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.707897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.721037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:29:59.768492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:10.612689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:10.619203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:37.173777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:37.193032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:37.200641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47566","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T08:31:04.763809Z","caller":"traceutil/trace.go:172","msg":"trace[186939196] transaction","detail":"{read_only:false; response_revision:1024; number_of_response:1; }","duration":"132.771865ms","start":"2025-10-18T08:31:04.631019Z","end":"2025-10-18T08:31:04.763791Z","steps":["trace[186939196] 'process raft request'  (duration: 77.482815ms)","trace[186939196] 'compare'  (duration: 55.20424ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T08:31:04.793729Z","caller":"traceutil/trace.go:172","msg":"trace[1138554988] transaction","detail":"{read_only:false; response_revision:1025; number_of_response:1; }","duration":"128.09833ms","start":"2025-10-18T08:31:04.665610Z","end":"2025-10-18T08:31:04.793709Z","steps":["trace[1138554988] 'process raft request'  (duration: 127.994937ms)"],"step_count":1}
	
	
	==> gcp-auth [0541dfc3f0d13512640cbd84c6b19fd0626e07b1e5447e26f099f37e1e6efdf6] <==
	2025/10/18 08:31:29 GCP Auth Webhook started!
	2025/10/18 08:31:49 Ready to marshal response ...
	2025/10/18 08:31:49 Ready to write response ...
	2025/10/18 08:31:49 Ready to marshal response ...
	2025/10/18 08:31:49 Ready to write response ...
	2025/10/18 08:31:49 Ready to marshal response ...
	2025/10/18 08:31:49 Ready to write response ...
	
	
	==> kernel <==
	 08:31:58 up 14 min,  0 user,  load average: 1.35, 0.67, 0.26
	Linux addons-757656 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7] <==
	I1018 08:30:09.520376       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 08:30:09.520547       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 08:30:09.520561       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 08:30:09.520967       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 08:30:39.521368       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 08:30:39.521417       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 08:30:39.521417       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 08:30:39.522326       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 08:30:41.021679       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 08:30:41.021713       1 metrics.go:72] Registering metrics
	I1018 08:30:41.021785       1 controller.go:711] "Syncing nftables rules"
	I1018 08:30:49.527313       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:30:49.527407       1 main.go:301] handling current node
	I1018 08:30:59.521419       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:30:59.521461       1 main.go:301] handling current node
	I1018 08:31:09.520282       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:31:09.520317       1 main.go:301] handling current node
	I1018 08:31:19.521066       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:31:19.521106       1 main.go:301] handling current node
	I1018 08:31:29.520277       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:31:29.520324       1 main.go:301] handling current node
	I1018 08:31:39.520449       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:31:39.520484       1 main.go:301] handling current node
	I1018 08:31:49.523431       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:31:49.523576       1 main.go:301] handling current node
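	The 30 seconds of "Failed to watch ... i/o timeout" errors against 10.96.0.1:443 cover kindnet's startup window, before the kubernetes service VIP was reachable from the pod network; once the informer caches sync at 08:30:41 the node-handling loop runs cleanly on its 10s tick. If this pattern persisted past bootstrap, a quick check (assuming the default minikube DaemonSet name, kindnet) would be:
	
	    kubectl --context addons-757656 -n kube-system logs ds/kindnet --tail=20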
	
	
	==> kube-apiserver [4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8] <==
	I1018 08:30:16.291249       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.101.138.23"}
	W1018 08:30:37.166969       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 08:30:37.173732       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 08:30:37.192994       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 08:30:37.200608       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 08:30:49.731228       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.138.23:443: connect: connection refused
	E1018 08:30:49.731373       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.138.23:443: connect: connection refused" logger="UnhandledError"
	W1018 08:30:49.731303       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.138.23:443: connect: connection refused
	E1018 08:30:49.731756       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.138.23:443: connect: connection refused" logger="UnhandledError"
	W1018 08:30:49.748421       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.138.23:443: connect: connection refused
	E1018 08:30:49.748570       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.138.23:443: connect: connection refused" logger="UnhandledError"
	W1018 08:30:49.751954       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.138.23:443: connect: connection refused
	E1018 08:30:49.751987       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.138.23:443: connect: connection refused" logger="UnhandledError"
	E1018 08:30:52.597396       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.150.136:443: connect: connection refused" logger="UnhandledError"
	W1018 08:30:52.597620       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 08:30:52.597690       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 08:30:52.597993       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.150.136:443: connect: connection refused" logger="UnhandledError"
	E1018 08:30:52.603482       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.150.136:443: connect: connection refused" logger="UnhandledError"
	E1018 08:30:52.624009       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.150.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.150.136:443: connect: connection refused" logger="UnhandledError"
	I1018 08:30:52.691154       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 08:31:56.676227       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57468: use of closed network connection
	E1018 08:31:56.829900       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57492: use of closed network connection
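	The "Failed calling webhook, failing open" pairs indicate the gcp-auth-mutate.k8s.io webhook is registered with failurePolicy: Ignore, so pod admission proceeds unmutated while the webhook backend behind 10.101.138.23:443 is still starting. The registration can be inspected with something like:
	
	    kubectl --context addons-757656 get mutatingwebhookconfigurations -o yaml | grep -B2 failurePolicy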
	
	
	==> kube-controller-manager [52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46] <==
	I1018 08:30:07.150319       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 08:30:07.150272       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 08:30:07.150336       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 08:30:07.150365       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 08:30:07.150420       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 08:30:07.150678       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 08:30:07.154907       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 08:30:07.154965       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:30:07.156140       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:30:07.162397       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 08:30:07.162476       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 08:30:07.162515       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 08:30:07.162520       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 08:30:07.162524       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 08:30:07.168894       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-757656" podCIDRs=["10.244.0.0/24"]
	I1018 08:30:07.174117       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 08:30:09.419075       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 08:30:37.160789       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 08:30:37.160911       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 08:30:37.160960       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 08:30:37.183640       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 08:30:37.187317       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 08:30:37.261822       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:30:37.288067       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 08:30:52.104954       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c] <==
	I1018 08:30:09.228223       1 server_linux.go:53] "Using iptables proxy"
	I1018 08:30:09.328988       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 08:30:09.430082       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 08:30:09.430129       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 08:30:09.430217       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 08:30:09.526500       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 08:30:09.526570       1 server_linux.go:132] "Using iptables Proxier"
	I1018 08:30:09.536489       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 08:30:09.543930       1 server.go:527] "Version info" version="v1.34.1"
	I1018 08:30:09.544089       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 08:30:09.549479       1 config.go:309] "Starting node config controller"
	I1018 08:30:09.549501       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 08:30:09.549510       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 08:30:09.550043       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 08:30:09.550303       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 08:30:09.550236       1 config.go:106] "Starting endpoint slice config controller"
	I1018 08:30:09.550400       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 08:30:09.550629       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 08:30:09.550332       1 config.go:200] "Starting service config controller"
	I1018 08:30:09.552481       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 08:30:09.651056       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 08:30:09.653514       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047] <==
	E1018 08:30:00.183376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 08:30:00.183415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:30:00.183433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 08:30:00.184115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 08:30:00.184152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:30:00.184291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 08:30:00.184450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 08:30:00.184497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:30:00.183405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 08:30:00.184540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 08:30:00.184649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 08:30:00.184744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 08:30:00.993035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 08:30:01.062377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 08:30:01.074615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:30:01.096153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 08:30:01.160213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 08:30:01.186304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 08:30:01.295719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:30:01.305684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 08:30:01.330915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 08:30:01.336022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 08:30:01.336081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 08:30:01.430742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1018 08:30:04.272611       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 08:31:20 addons-757656 kubelet[1313]: I1018 08:31:20.696780    1313 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13bacd5c8e794e7458e00f2321f9a8db0e11bab182fd41f83eb4a8bac714f9a9"
	Oct 18 08:31:20 addons-757656 kubelet[1313]: I1018 08:31:20.698545    1313 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a632b124a208dd0acf35bf074a1fd9b969dac62e84bed43742e080c3f7b9513"
	Oct 18 08:31:21 addons-757656 kubelet[1313]: E1018 08:31:21.656696    1313 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 18 08:31:21 addons-757656 kubelet[1313]: E1018 08:31:21.656804    1313 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef91ed70-4df9-4356-8c81-61877956a49a-gcr-creds podName:ef91ed70-4df9-4356-8c81-61877956a49a nodeName:}" failed. No retries permitted until 2025-10-18 08:31:53.656781102 +0000 UTC m=+111.308997569 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/ef91ed70-4df9-4356-8c81-61877956a49a-gcr-creds") pod "registry-creds-764b6fb674-h7xh9" (UID: "ef91ed70-4df9-4356-8c81-61877956a49a") : secret "registry-creds-gcr" not found
	Oct 18 08:31:21 addons-757656 kubelet[1313]: I1018 08:31:21.706121    1313 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7g848" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:31:21 addons-757656 kubelet[1313]: I1018 08:31:21.711662    1313 scope.go:117] "RemoveContainer" containerID="f29f93c00b910ca807a488536fed5a6492efdfea72f19aa4be2b65bd2a953090"
	Oct 18 08:31:21 addons-757656 kubelet[1313]: I1018 08:31:21.728892    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-7g848" podStartSLOduration=2.040376741 podStartE2EDuration="32.728870195s" podCreationTimestamp="2025-10-18 08:30:49 +0000 UTC" firstStartedPulling="2025-10-18 08:30:50.199822641 +0000 UTC m=+47.852039086" lastFinishedPulling="2025-10-18 08:31:20.888316077 +0000 UTC m=+78.540532540" observedRunningTime="2025-10-18 08:31:21.727075826 +0000 UTC m=+79.379292298" watchObservedRunningTime="2025-10-18 08:31:21.728870195 +0000 UTC m=+79.381086662"
	Oct 18 08:31:22 addons-757656 kubelet[1313]: I1018 08:31:22.717045    1313 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7g848" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:31:22 addons-757656 kubelet[1313]: I1018 08:31:22.867689    1313 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz86f\" (UniqueName: \"kubernetes.io/projected/b5c06f38-04c2-4f62-a5d6-49e0e0fdda31-kube-api-access-tz86f\") pod \"b5c06f38-04c2-4f62-a5d6-49e0e0fdda31\" (UID: \"b5c06f38-04c2-4f62-a5d6-49e0e0fdda31\") "
	Oct 18 08:31:22 addons-757656 kubelet[1313]: I1018 08:31:22.870257    1313 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c06f38-04c2-4f62-a5d6-49e0e0fdda31-kube-api-access-tz86f" (OuterVolumeSpecName: "kube-api-access-tz86f") pod "b5c06f38-04c2-4f62-a5d6-49e0e0fdda31" (UID: "b5c06f38-04c2-4f62-a5d6-49e0e0fdda31"). InnerVolumeSpecName "kube-api-access-tz86f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 08:31:22 addons-757656 kubelet[1313]: I1018 08:31:22.968296    1313 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tz86f\" (UniqueName: \"kubernetes.io/projected/b5c06f38-04c2-4f62-a5d6-49e0e0fdda31-kube-api-access-tz86f\") on node \"addons-757656\" DevicePath \"\""
	Oct 18 08:31:23 addons-757656 kubelet[1313]: I1018 08:31:23.721332    1313 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad4eb3f1b94390950a9dde22eee5f55e3993b035b09daa43a13b38bbf0d8ab07"
	Oct 18 08:31:25 addons-757656 kubelet[1313]: I1018 08:31:25.741205    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-9bx4w" podStartSLOduration=57.957179242 podStartE2EDuration="1m16.741187449s" podCreationTimestamp="2025-10-18 08:30:09 +0000 UTC" firstStartedPulling="2025-10-18 08:31:06.008059425 +0000 UTC m=+63.660275874" lastFinishedPulling="2025-10-18 08:31:24.79206762 +0000 UTC m=+82.444284081" observedRunningTime="2025-10-18 08:31:25.740660524 +0000 UTC m=+83.392876992" watchObservedRunningTime="2025-10-18 08:31:25.741187449 +0000 UTC m=+83.393403918"
	Oct 18 08:31:28 addons-757656 kubelet[1313]: I1018 08:31:28.757824    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-km4ch" podStartSLOduration=67.830145948 podStartE2EDuration="1m19.757806808s" podCreationTimestamp="2025-10-18 08:30:09 +0000 UTC" firstStartedPulling="2025-10-18 08:31:16.556098431 +0000 UTC m=+74.208314889" lastFinishedPulling="2025-10-18 08:31:28.483759303 +0000 UTC m=+86.135975749" observedRunningTime="2025-10-18 08:31:28.757324062 +0000 UTC m=+86.409540529" watchObservedRunningTime="2025-10-18 08:31:28.757806808 +0000 UTC m=+86.410023275"
	Oct 18 08:31:30 addons-757656 kubelet[1313]: I1018 08:31:30.763783    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-z25bv" podStartSLOduration=67.794084568 podStartE2EDuration="1m14.76376299s" podCreationTimestamp="2025-10-18 08:30:16 +0000 UTC" firstStartedPulling="2025-10-18 08:31:22.799049669 +0000 UTC m=+80.451266119" lastFinishedPulling="2025-10-18 08:31:29.768728079 +0000 UTC m=+87.420944541" observedRunningTime="2025-10-18 08:31:30.762729272 +0000 UTC m=+88.414945740" watchObservedRunningTime="2025-10-18 08:31:30.76376299 +0000 UTC m=+88.415979456"
	Oct 18 08:31:31 addons-757656 kubelet[1313]: I1018 08:31:31.486919    1313 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 18 08:31:31 addons-757656 kubelet[1313]: I1018 08:31:31.486966    1313 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 18 08:31:33 addons-757656 kubelet[1313]: I1018 08:31:33.790051    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-cdc5c" podStartSLOduration=1.7686321440000001 podStartE2EDuration="44.790029484s" podCreationTimestamp="2025-10-18 08:30:49 +0000 UTC" firstStartedPulling="2025-10-18 08:30:50.16542989 +0000 UTC m=+47.817646341" lastFinishedPulling="2025-10-18 08:31:33.186827232 +0000 UTC m=+90.839043681" observedRunningTime="2025-10-18 08:31:33.790012776 +0000 UTC m=+91.442229256" watchObservedRunningTime="2025-10-18 08:31:33.790029484 +0000 UTC m=+91.442245951"
	Oct 18 08:31:49 addons-757656 kubelet[1313]: I1018 08:31:49.574671    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a3b7e68b-5d74-4895-bd87-cf2c9aaf93c1-gcp-creds\") pod \"busybox\" (UID: \"a3b7e68b-5d74-4895-bd87-cf2c9aaf93c1\") " pod="default/busybox"
	Oct 18 08:31:49 addons-757656 kubelet[1313]: I1018 08:31:49.574759    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2d28\" (UniqueName: \"kubernetes.io/projected/a3b7e68b-5d74-4895-bd87-cf2c9aaf93c1-kube-api-access-v2d28\") pod \"busybox\" (UID: \"a3b7e68b-5d74-4895-bd87-cf2c9aaf93c1\") " pod="default/busybox"
	Oct 18 08:31:50 addons-757656 kubelet[1313]: I1018 08:31:50.430824    1313 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e3e54c4-0679-4e62-9d40-d77f38530950" path="/var/lib/kubelet/pods/7e3e54c4-0679-4e62-9d40-d77f38530950/volumes"
	Oct 18 08:31:50 addons-757656 kubelet[1313]: I1018 08:31:50.851762    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.23785978 podStartE2EDuration="1.851740704s" podCreationTimestamp="2025-10-18 08:31:49 +0000 UTC" firstStartedPulling="2025-10-18 08:31:49.852442797 +0000 UTC m=+107.504659256" lastFinishedPulling="2025-10-18 08:31:50.466323726 +0000 UTC m=+108.118540180" observedRunningTime="2025-10-18 08:31:50.850665916 +0000 UTC m=+108.502882383" watchObservedRunningTime="2025-10-18 08:31:50.851740704 +0000 UTC m=+108.503957173"
	Oct 18 08:31:53 addons-757656 kubelet[1313]: E1018 08:31:53.705745    1313 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 18 08:31:53 addons-757656 kubelet[1313]: E1018 08:31:53.705830    1313 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef91ed70-4df9-4356-8c81-61877956a49a-gcr-creds podName:ef91ed70-4df9-4356-8c81-61877956a49a nodeName:}" failed. No retries permitted until 2025-10-18 08:32:57.705815501 +0000 UTC m=+175.358031959 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/ef91ed70-4df9-4356-8c81-61877956a49a-gcr-creds") pod "registry-creds-764b6fb674-h7xh9" (UID: "ef91ed70-4df9-4356-8c81-61877956a49a") : secret "registry-creds-gcr" not found
	Oct 18 08:31:54 addons-757656 kubelet[1313]: I1018 08:31:54.431419    1313 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5c06f38-04c2-4f62-a5d6-49e0e0fdda31" path="/var/lib/kubelet/pods/b5c06f38-04c2-4f62-a5d6-49e0e0fdda31/volumes"
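	The repeated MountVolume.SetUp failures are the registry-creds pod waiting on a secret that does not exist yet, with kubelet's exponential backoff visible in the retry intervals (32s, then 1m4s). In an interactive session the secret would normally be created by configuring the addon, e.g.:
	
	    minikube -p addons-757656 addons configure registry-creds
	
	Since this run never configures it, the pod stays stuck on the volume mount, which matches registry-creds-764b6fb674-h7xh9 appearing in the non-running pod list further down.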
	
	
	==> storage-provisioner [7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9] <==
	W1018 08:31:34.533287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:36.536166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:36.539979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:38.543307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:38.547332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:40.550950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:40.554895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:42.557779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:42.565365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:44.568859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:44.573845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:46.576781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:46.580549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:48.583823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:48.588667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:50.591713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:50.595455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:52.597838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:52.601670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:54.604331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:54.609329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:56.612107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:56.616176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:58.620022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:31:58.623938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
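	The warning pairs roughly every two seconds are consistent with storage-provisioner renewing a leader-election lock backed by a v1 Endpoints object, which Kubernetes has deprecated in favour of discovery.k8s.io/v1 EndpointSlice. The modern equivalents can be listed with:
	
	    kubectl --context addons-757656 -n kube-system get endpointslices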
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-757656 -n addons-757656
helpers_test.go:269: (dbg) Run:  kubectl --context addons-757656 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-s2qbg ingress-nginx-admission-patch-4jmq4 registry-creds-764b6fb674-h7xh9
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-757656 describe pod ingress-nginx-admission-create-s2qbg ingress-nginx-admission-patch-4jmq4 registry-creds-764b6fb674-h7xh9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-757656 describe pod ingress-nginx-admission-create-s2qbg ingress-nginx-admission-patch-4jmq4 registry-creds-764b6fb674-h7xh9: exit status 1 (61.194755ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-s2qbg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4jmq4" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-h7xh9" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-757656 describe pod ingress-nginx-admission-create-s2qbg ingress-nginx-admission-patch-4jmq4 registry-creds-764b6fb674-h7xh9: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable headlamp --alsologtostderr -v=1: exit status 11 (229.533927ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 08:31:59.428907   19995 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:31:59.429212   19995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:31:59.429224   19995 out.go:374] Setting ErrFile to fd 2...
	I1018 08:31:59.429231   19995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:31:59.429506   19995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:31:59.429778   19995 mustload.go:65] Loading cluster: addons-757656
	I1018 08:31:59.430147   19995 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:31:59.430165   19995 addons.go:606] checking whether the cluster is paused
	I1018 08:31:59.430267   19995 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:31:59.430283   19995 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:31:59.430711   19995 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:31:59.449052   19995 ssh_runner.go:195] Run: systemctl --version
	I1018 08:31:59.449116   19995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:31:59.467620   19995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:31:59.563047   19995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:31:59.563118   19995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:31:59.592090   19995 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:31:59.592114   19995 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:31:59.592120   19995 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:31:59.592125   19995 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:31:59.592129   19995 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:31:59.592135   19995 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:31:59.592140   19995 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:31:59.592145   19995 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:31:59.592149   19995 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:31:59.592159   19995 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:31:59.592169   19995 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:31:59.592177   19995 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:31:59.592181   19995 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:31:59.592188   19995 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:31:59.592191   19995 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:31:59.592196   19995 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:31:59.592199   19995 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:31:59.592204   19995 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:31:59.592207   19995 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:31:59.592209   19995 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:31:59.592238   19995 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:31:59.592248   19995 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:31:59.592259   19995 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:31:59.592264   19995 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:31:59.592267   19995 cri.go:89] found id: ""
	I1018 08:31:59.592305   19995 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:31:59.606093   19995 out.go:203] 
	W1018 08:31:59.607542   19995 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:31:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:31:59.607570   19995 out.go:285] * 
	W1018 08:31:59.610536   19995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:31:59.612123   19995 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.55s)
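Every addons disable failure in this report follows the pattern above: the command exits with MK_ADDON_DISABLE_PAUSED because the paused-state check shells out to sudo runc list -f json, and /run/runc does not exist on this cri-o node, so the check fails before any addon is touched. A sketch of a more tolerant check that treats the missing runc state directory as "no paused containers" (an illustrative hardening under that assumption, not minikube's actual code):

	// pausedcheck.go - an illustrative sketch, not minikube's implementation:
	// ask runc for its container list, but treat a missing state directory
	// (the exact failure seen above) as an empty list instead of a fatal error.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listRuncContainers returns runc's JSON container list, or "" when the
	// runc state directory does not exist on this runtime.
	func listRuncContainers() (string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "no such file or directory") {
				// /run/runc is absent, so runc has never tracked a container
				// here. Assumption: absence implies nothing is paused.
				return "", nil
			}
			return "", fmt.Errorf("runc list: %v: %s", err, out)
		}
		return string(out), nil
	}

	func main() {
		list, err := listRuncContainers()
		if err != nil {
			fmt.Println("paused check failed:", err)
			return
		}
		fmt.Println("runc containers:", list)
	}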

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-x5chs" [6a753b8d-f12d-4d0d-bcf8-7342986f8cab] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003278889s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (245.301829ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 08:32:13.607460   21459 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:32:13.607766   21459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:13.607776   21459 out.go:374] Setting ErrFile to fd 2...
	I1018 08:32:13.607781   21459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:13.607988   21459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:32:13.608239   21459 mustload.go:65] Loading cluster: addons-757656
	I1018 08:32:13.608574   21459 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:13.608596   21459 addons.go:606] checking whether the cluster is paused
	I1018 08:32:13.608675   21459 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:13.608687   21459 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:32:13.609044   21459 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:32:13.628269   21459 ssh_runner.go:195] Run: systemctl --version
	I1018 08:32:13.628372   21459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:32:13.648145   21459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:32:13.749605   21459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:32:13.749686   21459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:32:13.781131   21459 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:32:13.781156   21459 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:32:13.781162   21459 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:32:13.781166   21459 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:32:13.781170   21459 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:32:13.781176   21459 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:32:13.781180   21459 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:32:13.781184   21459 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:32:13.781188   21459 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:32:13.781196   21459 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:32:13.781203   21459 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:32:13.781207   21459 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:32:13.781211   21459 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:32:13.781215   21459 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:32:13.781234   21459 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:32:13.781245   21459 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:32:13.781249   21459 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:32:13.781254   21459 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:32:13.781258   21459 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:32:13.781262   21459 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:32:13.781266   21459 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:32:13.781276   21459 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:32:13.781283   21459 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:32:13.781287   21459 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:32:13.781293   21459 cri.go:89] found id: ""
	I1018 08:32:13.781338   21459 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:32:13.796974   21459 out.go:203] 
	W1018 08:32:13.798376   21459 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:32:13.798403   21459 out.go:285] * 
	W1018 08:32:13.803115   21459 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:32:13.806077   21459 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)
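The readiness half of this test passes: app=cloud-spanner-emulator becomes healthy in about 5 seconds, and only the subsequent disable step fails. A minimal sketch (not the helpers_test implementation) of that kind of label-selector wait with client-go, using the selector and namespace from the check above; the kubeconfig path is an illustrative assumption:

	// podwait.go - a minimal sketch of polling until a pod matching a label
	// selector reaches the Running phase. Requires a recent apimachinery
	// (wait.PollUntilContextTimeout was added around v0.27).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForRunningPod(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // tolerate transient API errors and keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = waitForRunningPod(cs, "default", "app=cloud-spanner-emulator", 6*time.Minute)
		fmt.Println("wait result:", err)
	}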

                                                
                                    
x
+
TestAddons/parallel/LocalPath (10.11s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-757656 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-757656 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757656 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [e61cdaa7-696d-4efc-8f94-b51daaa9e55f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [e61cdaa7-696d-4efc-8f94-b51daaa9e55f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [e61cdaa7-696d-4efc-8f94-b51daaa9e55f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003077405s
addons_test.go:967: (dbg) Run:  kubectl --context addons-757656 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 ssh "cat /opt/local-path-provisioner/pvc-3bb86e8d-f47c-4433-8e20-a46488dd0d44_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-757656 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-757656 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (234.612067ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 08:32:18.576438   22272 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:32:18.576759   22272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:18.576771   22272 out.go:374] Setting ErrFile to fd 2...
	I1018 08:32:18.576777   22272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:18.576991   22272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:32:18.577273   22272 mustload.go:65] Loading cluster: addons-757656
	I1018 08:32:18.577643   22272 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:18.577661   22272 addons.go:606] checking whether the cluster is paused
	I1018 08:32:18.577765   22272 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:18.577780   22272 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:32:18.578141   22272 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:32:18.596255   22272 ssh_runner.go:195] Run: systemctl --version
	I1018 08:32:18.596453   22272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:32:18.614543   22272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:32:18.710624   22272 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:32:18.710710   22272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:32:18.742897   22272 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:32:18.742929   22272 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:32:18.742934   22272 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:32:18.742940   22272 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:32:18.742944   22272 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:32:18.742950   22272 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:32:18.742955   22272 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:32:18.742959   22272 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:32:18.742963   22272 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:32:18.742970   22272 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:32:18.742977   22272 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:32:18.742979   22272 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:32:18.742982   22272 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:32:18.742984   22272 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:32:18.742987   22272 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:32:18.742992   22272 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:32:18.742997   22272 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:32:18.743001   22272 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:32:18.743004   22272 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:32:18.743006   22272 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:32:18.743008   22272 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:32:18.743011   22272 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:32:18.743013   22272 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:32:18.743015   22272 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:32:18.743018   22272 cri.go:89] found id: ""
	I1018 08:32:18.743055   22272 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:32:18.757833   22272 out.go:203] 
	W1018 08:32:18.759390   22272 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:32:18.759410   22272 out.go:285] * 
	W1018 08:32:18.762365   22272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:32:18.763414   22272 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.11s)
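The path read back in the ssh step above encodes how the local-path provisioner lays out volumes on the node: with its default configuration, each volume lives under /opt/local-path-provisioner/<pv-name>_<namespace>_<pvc-name>. A small sketch that reconstructs that path from the transcript's values:

	// localpath.go - a sketch of how the host path read above is formed by
	// rancher's local-path provisioner under its default configuration.
	package main

	import "fmt"

	// hostPathFor builds <base>/<pv-name>_<namespace>_<pvc-name>.
	func hostPathFor(pvName, namespace, pvcName string) string {
		return fmt.Sprintf("/opt/local-path-provisioner/%s_%s_%s", pvName, namespace, pvcName)
	}

	func main() {
		// Values mirror the transcript above; file1 is the file the test reads.
		fmt.Println(hostPathFor("pvc-3bb86e8d-f47c-4433-8e20-a46488dd0d44", "default", "test-pvc") + "/file1")
	}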

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-bnzlc" [6bc7ead8-1876-4ced-8f8f-ca3c9e987a0e] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003863006s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (241.071413ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 08:32:02.120677   20099 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:32:02.120991   20099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:02.121003   20099 out.go:374] Setting ErrFile to fd 2...
	I1018 08:32:02.121010   20099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:02.121228   20099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:32:02.121522   20099 mustload.go:65] Loading cluster: addons-757656
	I1018 08:32:02.121907   20099 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:02.121926   20099 addons.go:606] checking whether the cluster is paused
	I1018 08:32:02.122027   20099 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:02.122045   20099 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:32:02.122458   20099 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:32:02.142259   20099 ssh_runner.go:195] Run: systemctl --version
	I1018 08:32:02.142323   20099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:32:02.160943   20099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:32:02.259807   20099 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:32:02.259895   20099 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:32:02.289188   20099 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:32:02.289223   20099 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:32:02.289229   20099 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:32:02.289232   20099 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:32:02.289235   20099 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:32:02.289239   20099 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:32:02.289243   20099 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:32:02.289247   20099 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:32:02.289251   20099 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:32:02.289274   20099 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:32:02.289283   20099 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:32:02.289287   20099 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:32:02.289295   20099 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:32:02.289300   20099 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:32:02.289307   20099 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:32:02.289318   20099 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:32:02.289325   20099 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:32:02.289331   20099 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:32:02.289334   20099 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:32:02.289337   20099 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:32:02.289351   20099 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:32:02.289356   20099 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:32:02.289360   20099 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:32:02.289364   20099 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:32:02.289369   20099 cri.go:89] found id: ""
	I1018 08:32:02.289422   20099 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:32:02.303632   20099 out.go:203] 
	W1018 08:32:02.304727   20099 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:32:02.304751   20099 out.go:285] * 
	W1018 08:32:02.307661   20099 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:32:02.308865   20099 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)
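Before consulting runc, each disable attempt above enumerates kube-system containers with the same crictl invocation. A self-contained sketch of that step, assuming only the command visible in the logs (crictl on PATH, sudo rights):

	// crictl_ids.go - a sketch of the container enumeration each disable
	// attempt performs: list kube-system container IDs via crictl, exactly
	// the command shown in the transcripts above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Println(len(ids), "kube-system containers")
	}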

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-v82m4" [7f3fb925-c044-4371-a54c-2005a1314b1d] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003379716s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable yakd --alsologtostderr -v=1: exit status 11 (239.146625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 08:32:08.365321   20400 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:32:08.365486   20400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:08.365500   20400 out.go:374] Setting ErrFile to fd 2...
	I1018 08:32:08.365507   20400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:08.365817   20400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:32:08.366121   20400 mustload.go:65] Loading cluster: addons-757656
	I1018 08:32:08.366502   20400 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:08.366518   20400 addons.go:606] checking whether the cluster is paused
	I1018 08:32:08.366636   20400 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:08.366652   20400 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:32:08.367061   20400 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:32:08.387898   20400 ssh_runner.go:195] Run: systemctl --version
	I1018 08:32:08.387952   20400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:32:08.406206   20400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:32:08.503694   20400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:32:08.503752   20400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:32:08.533014   20400 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:32:08.533033   20400 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:32:08.533036   20400 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:32:08.533041   20400 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:32:08.533044   20400 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:32:08.533047   20400 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:32:08.533049   20400 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:32:08.533052   20400 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:32:08.533054   20400 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:32:08.533070   20400 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:32:08.533074   20400 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:32:08.533076   20400 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:32:08.533079   20400 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:32:08.533081   20400 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:32:08.533084   20400 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:32:08.533095   20400 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:32:08.533102   20400 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:32:08.533110   20400 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:32:08.533114   20400 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:32:08.533116   20400 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:32:08.533121   20400 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:32:08.533123   20400 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:32:08.533126   20400 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:32:08.533128   20400 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:32:08.533130   20400 cri.go:89] found id: ""
	I1018 08:32:08.533171   20400 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:32:08.547513   20400 out.go:203] 
	W1018 08:32:08.548874   20400 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:32:08.548891   20400 out.go:285] * 
	W1018 08:32:08.552021   20400 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:32:08.553393   20400 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.24s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-v82lt" [24a1cd58-553e-4bee-beaf-75f0e39eeb29] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.041197369s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-757656 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757656 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (234.580614ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 08:32:08.462821   20445 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:32:08.463118   20445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:08.463129   20445 out.go:374] Setting ErrFile to fd 2...
	I1018 08:32:08.463133   20445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:08.463402   20445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:32:08.463677   20445 mustload.go:65] Loading cluster: addons-757656
	I1018 08:32:08.464002   20445 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:08.464017   20445 addons.go:606] checking whether the cluster is paused
	I1018 08:32:08.464092   20445 config.go:182] Loaded profile config "addons-757656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:08.464103   20445 host.go:66] Checking if "addons-757656" exists ...
	I1018 08:32:08.464479   20445 cli_runner.go:164] Run: docker container inspect addons-757656 --format={{.State.Status}}
	I1018 08:32:08.482225   20445 ssh_runner.go:195] Run: systemctl --version
	I1018 08:32:08.482278   20445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757656
	I1018 08:32:08.500584   20445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/addons-757656/id_rsa Username:docker}
	I1018 08:32:08.596828   20445 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:32:08.596918   20445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:32:08.627927   20445 cri.go:89] found id: "4790d2a16058f3034a1b1e4ae855894aab7d7f3d7c610e86af4396f6d3498080"
	I1018 08:32:08.627946   20445 cri.go:89] found id: "48cc0cbf614ba386328b79cd306ce1fd90f8b4c338b8eb054421e5183efc5d4e"
	I1018 08:32:08.627950   20445 cri.go:89] found id: "c8379247a51b8242f6cb2cd6503d43ea0ed66dd9900fc8728b601695286a1d0a"
	I1018 08:32:08.627953   20445 cri.go:89] found id: "941a165621387aecf9f61fa5f0858b119aa2452338edbbe4ffbe1cff9b72292f"
	I1018 08:32:08.627956   20445 cri.go:89] found id: "14afe1f1816da18c2ce04153131d2aa122c50659a7faa8f9e40544d725a3d2c7"
	I1018 08:32:08.627959   20445 cri.go:89] found id: "440454c3da4c0302b146f97ed6c6f0e44df0c561a5f8d848e7e81218f08ef6db"
	I1018 08:32:08.627962   20445 cri.go:89] found id: "bbb9cb4aa33f4e4e42c90d9f8b44b4c8f4c50b6a89edc5b1893c52e37b664fed"
	I1018 08:32:08.627964   20445 cri.go:89] found id: "0ad5891c05dff52f7d29bd1edd32ab0a01ccc280a8974a244ec73419bd21a831"
	I1018 08:32:08.627967   20445 cri.go:89] found id: "f6e03a69b7bc41d32daae0fea75627f3e6bab34641aba500b0deec44241fa209"
	I1018 08:32:08.627979   20445 cri.go:89] found id: "4171876174cfa4f01c139bc1155ba660392b57736128ebf7bc1dca331bbcaee4"
	I1018 08:32:08.627982   20445 cri.go:89] found id: "813e46f6ecd6f3f0ed03b73a97ae5413d8bb65920271777404da143c0e902755"
	I1018 08:32:08.627990   20445 cri.go:89] found id: "ab63780aacfa0fb8341e0e937c7631e7eaf6c63690759abd7a5b64b2e83e3368"
	I1018 08:32:08.627993   20445 cri.go:89] found id: "72deecf66bdbeb5deb3d6951223a20fdac95c3fa8a32985ef6454a42357402e1"
	I1018 08:32:08.627995   20445 cri.go:89] found id: "cd1b8704f38dd764c4b72252086f2f53b2d98dc57ff0adbe8d204888170e994c"
	I1018 08:32:08.627998   20445 cri.go:89] found id: "c08a0e9528c61fd1b79ac94a2160e7ee63bc34d3e75541f78b2b3b9028daa8e1"
	I1018 08:32:08.628004   20445 cri.go:89] found id: "c216c132bff884542e58d26afd8de4c6ca39c00b23e91a35690a66be1be95c45"
	I1018 08:32:08.628009   20445 cri.go:89] found id: "dbf5c6e8579fb377ddb314d3b43db1406eed33b898a34570445f3dbda1c63266"
	I1018 08:32:08.628014   20445 cri.go:89] found id: "7189be801872dca3adedf930d1116a7930ab711e9e199fc588f8ad5ec67c23c9"
	I1018 08:32:08.628016   20445 cri.go:89] found id: "6c971e87dacce692e9ba51b9df623358656653ca81349c910a48ee4deca9701c"
	I1018 08:32:08.628019   20445 cri.go:89] found id: "1511480aef50d5e66eab7e6f72a8a21ffb3a3ad656dc0ce5a729ee3afe26e9c7"
	I1018 08:32:08.628021   20445 cri.go:89] found id: "52adc977887b4b184292fed9d6952cb67b4fb667289dd8df966abb85c03aaa46"
	I1018 08:32:08.628023   20445 cri.go:89] found id: "4faa6d23dba1bdc8c7eba89649f47072c5b426937bf2a2b10aa4b52d39f44cf8"
	I1018 08:32:08.628026   20445 cri.go:89] found id: "717994737c9e9e736b5e73abe6513db6ce8ecf19404100a264fa9c13ee71f047"
	I1018 08:32:08.628029   20445 cri.go:89] found id: "56d69d63fccc147fc338479d722142a993f3013be2c188974a95d01e019bcb14"
	I1018 08:32:08.628031   20445 cri.go:89] found id: ""
	I1018 08:32:08.628067   20445 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:32:08.643110   20445 out.go:203] 
	W1018 08:32:08.644608   20445 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:32:08.644637   20445 out.go:285] * 
	W1018 08:32:08.647664   20445 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:32:08.649197   20445 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-757656 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.28s)
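For context: minikube's addon-disable path first checks whether the cluster is paused, and that check shells out to runc directly; on this CRI-O node /run/runc does not exist, so the check itself fails before the addon is ever touched. A minimal hand-check sketch, assuming SSH access to the node (crictl queries the runtime over CRI instead of reading runc's state directory):

	minikube -p addons-757656 ssh -- sudo ls /run/runc                # fails here: no such file or directory
	minikube -p addons-757656 ssh -- sudo crictl ps --state running   # CRI view of the same containers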

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (602.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-897534 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-897534 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-xw6rz" [74e87598-8125-4cb6-a207-f83852f22ae6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-897534 -n functional-897534
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-18 08:47:57.848525337 +0000 UTC m=+1125.176157111
functional_test.go:1645: (dbg) Run:  kubectl --context functional-897534 describe po hello-node-connect-7d85dfc575-xw6rz -n default
functional_test.go:1645: (dbg) kubectl --context functional-897534 describe po hello-node-connect-7d85dfc575-xw6rz -n default:
Name:             hello-node-connect-7d85dfc575-xw6rz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-897534/192.168.49.2
Start Time:       Sat, 18 Oct 2025 08:37:57 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6q9zr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6q9zr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xw6rz to functional-897534
Normal   Pulling    7m9s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m9s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m9s (x5 over 9m58s)    kubelet            Error: ErrImagePull
Warning  Failed     4m51s (x20 over 9m57s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m37s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
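The repeated ErrImagePull above is CRI-O's short-name policy at work: with short-name mode enforcing, the bare reference kicbase/echo-server cannot be resolved to a single registry, so the pull is refused rather than guessed. A minimal sketch of the usual fix, fully qualifying the reference (the docker.io registry is an assumption about where the test image lives):

	# Point the existing container at a fully-qualified image name,
	# so short-name resolution is never consulted
	kubectl --context functional-897534 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest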
functional_test.go:1645: (dbg) Run:  kubectl --context functional-897534 logs hello-node-connect-7d85dfc575-xw6rz -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-897534 logs hello-node-connect-7d85dfc575-xw6rz -n default: exit status 1 (62.486143ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-xw6rz" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-897534 logs hello-node-connect-7d85dfc575-xw6rz -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-897534 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-xw6rz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-897534/192.168.49.2
Start Time:       Sat, 18 Oct 2025 08:37:57 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6q9zr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6q9zr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xw6rz to functional-897534
Normal   Pulling    7m10s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 9m59s)   kubelet            Error: ErrImagePull
Warning  Failed     4m52s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m38s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-897534 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-897534 logs -l app=hello-node-connect: exit status 1 (62.805072ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-xw6rz" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-897534 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-897534 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.9.5
IPs:                      10.104.9.5
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30976/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
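Note the empty Endpoints: field in the service describe above: the NodePort service is wired up, but it has no backends because the only matching pod never became Ready. This can be confirmed directly (a read-only check, using only the names already shown):

	kubectl --context functional-897534 get endpoints hello-node-connect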
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-897534
helpers_test.go:243: (dbg) docker inspect functional-897534:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53db954e69650187326cd30378601ae162802735b80ab620a385eafbe734f105",
	        "Created": "2025-10-18T08:35:46.882961055Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33665,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T08:35:46.924149274Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/53db954e69650187326cd30378601ae162802735b80ab620a385eafbe734f105/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53db954e69650187326cd30378601ae162802735b80ab620a385eafbe734f105/hostname",
	        "HostsPath": "/var/lib/docker/containers/53db954e69650187326cd30378601ae162802735b80ab620a385eafbe734f105/hosts",
	        "LogPath": "/var/lib/docker/containers/53db954e69650187326cd30378601ae162802735b80ab620a385eafbe734f105/53db954e69650187326cd30378601ae162802735b80ab620a385eafbe734f105-json.log",
	        "Name": "/functional-897534",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-897534:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-897534",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53db954e69650187326cd30378601ae162802735b80ab620a385eafbe734f105",
	                "LowerDir": "/var/lib/docker/overlay2/853aecd8ea9c4b29bf5dc3cbc5b6b0f1a7996805991d6dce9249c05d62075f2f-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/853aecd8ea9c4b29bf5dc3cbc5b6b0f1a7996805991d6dce9249c05d62075f2f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/853aecd8ea9c4b29bf5dc3cbc5b6b0f1a7996805991d6dce9249c05d62075f2f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/853aecd8ea9c4b29bf5dc3cbc5b6b0f1a7996805991d6dce9249c05d62075f2f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-897534",
	                "Source": "/var/lib/docker/volumes/functional-897534/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-897534",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-897534",
	                "name.minikube.sigs.k8s.io": "functional-897534",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "faa39b699ce6a2ac0bde84af40850ec14305f072de45511407bb4c97646b2343",
	            "SandboxKey": "/var/run/docker/netns/faa39b699ce6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-897534": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:1f:14:c7:e7:96",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6e79bf870a4218aeb20c0a6318023625f57c4c2fe4602a31227a007bb1f3e05",
	                    "EndpointID": "5cd07320385dbb015418cd3b2a29ae6b5c52a239a52ad850e0928db469476e11",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-897534",
	                        "53db954e6965"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
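One detail worth noting in the inspect output: the container publishes only minikube's fixed set of ports (22, 2376, 5000, 8441, 32443) to 127.0.0.1, so NodePort 30976 is not reachable from the host directly. With the docker driver, the service URL has to come through minikube's tunnel, which is what the `service ... --url` invocations recorded later in the audit log were attempting:

	out/minikube-linux-amd64 -p functional-897534 service hello-node-connect --url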
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-897534 -n functional-897534
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-897534 logs -n 25: (1.272377593s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-897534 ssh findmnt -T /mount3                                                               │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:37 UTC │ 18 Oct 25 08:37 UTC │
	│ ssh            │ functional-897534 ssh echo hello                                                                       │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:37 UTC │ 18 Oct 25 08:37 UTC │
	│ mount          │ -p functional-897534 --kill=true                                                                       │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:37 UTC │                     │
	│ tunnel         │ functional-897534 tunnel --alsologtostderr                                                             │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:37 UTC │                     │
	│ tunnel         │ functional-897534 tunnel --alsologtostderr                                                             │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:37 UTC │                     │
	│ ssh            │ functional-897534 ssh cat /etc/hostname                                                                │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:37 UTC │ 18 Oct 25 08:37 UTC │
	│ addons         │ functional-897534 addons list                                                                          │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:37 UTC │ 18 Oct 25 08:37 UTC │
	│ addons         │ functional-897534 addons list -o json                                                                  │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:37 UTC │ 18 Oct 25 08:37 UTC │
	│ tunnel         │ functional-897534 tunnel --alsologtostderr                                                             │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:37 UTC │                     │
	│ ssh            │ functional-897534 ssh sudo cat /etc/test/nested/copy/9394/hosts                                        │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ update-context │ functional-897534 update-context --alsologtostderr -v=2                                                │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ update-context │ functional-897534 update-context --alsologtostderr -v=2                                                │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ update-context │ functional-897534 update-context --alsologtostderr -v=2                                                │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ image          │ functional-897534 image ls --format short --alsologtostderr                                            │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ ssh            │ functional-897534 ssh pgrep buildkitd                                                                  │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │                     │
	│ image          │ functional-897534 image build -t localhost/my-image:functional-897534 testdata/build --alsologtostderr │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ image          │ functional-897534 image ls                                                                             │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ image          │ functional-897534 image ls --format yaml --alsologtostderr                                             │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ image          │ functional-897534 image ls --format json --alsologtostderr                                             │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ image          │ functional-897534 image ls --format table --alsologtostderr                                            │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ service        │ functional-897534 service list                                                                         │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:47 UTC │ 18 Oct 25 08:47 UTC │
	│ service        │ functional-897534 service list -o json                                                                 │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:47 UTC │ 18 Oct 25 08:47 UTC │
	│ service        │ functional-897534 service --namespace=default --https --url hello-node                                 │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:47 UTC │                     │
	│ service        │ functional-897534 service hello-node --url --format={{.IP}}                                            │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:47 UTC │                     │
	│ service        │ functional-897534 service hello-node --url                                                             │ functional-897534 │ jenkins │ v1.37.0 │ 18 Oct 25 08:47 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:37:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:37:49.582893   45003 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:37:49.583012   45003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:37:49.583025   45003 out.go:374] Setting ErrFile to fd 2...
	I1018 08:37:49.583032   45003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:37:49.583367   45003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:37:49.583835   45003 out.go:368] Setting JSON to false
	I1018 08:37:49.584818   45003 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1218,"bootTime":1760775452,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:37:49.584915   45003 start.go:141] virtualization: kvm guest
	I1018 08:37:49.586603   45003 out.go:179] * [functional-897534] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:37:49.588275   45003 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 08:37:49.588274   45003 notify.go:220] Checking for updates...
	I1018 08:37:49.590562   45003 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:37:49.591722   45003 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 08:37:49.592855   45003 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 08:37:49.593886   45003 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 08:37:49.594998   45003 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:37:49.597293   45003 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:37:49.597834   45003 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:37:49.623432   45003 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 08:37:49.623558   45003 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:37:49.689627   45003 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-18 08:37:49.678250805 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:37:49.689717   45003 docker.go:318] overlay module found
	I1018 08:37:49.691336   45003 out.go:179] * Using the docker driver based on the existing profile
	I1018 08:37:49.692462   45003 start.go:305] selected driver: docker
	I1018 08:37:49.692478   45003 start.go:925] validating driver "docker" against &{Name:functional-897534 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-897534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:37:49.692598   45003 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:37:49.694691   45003 out.go:203] 
	W1018 08:37:49.695837   45003 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I1018 08:37:49.696930   45003 out.go:203] 
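The aborted start above tripped minikube's memory floor check (250MiB requested against the 1800MB minimum), so this start attempt exited without touching the already-running cluster. A sketch of a start that clears the floor (the 2048MB figure is an arbitrary choice above the minimum):

	out/minikube-linux-amd64 start -p functional-897534 --memory=2048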
	
	
	==> CRI-O <==
	Oct 18 08:38:09 functional-897534 crio[3547]: time="2025-10-18T08:38:09.464712764Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Oct 18 08:38:14 functional-897534 crio[3547]: time="2025-10-18T08:38:14.894671778Z" level=info msg="Pulled image: docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da" id=ce4c79ac-5c07-4b73-8fa5-99600ca2eaee name=/runtime.v1.ImageService/PullImage
	Oct 18 08:38:14 functional-897534 crio[3547]: time="2025-10-18T08:38:14.895387308Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=eff8ff8e-58eb-437c-9057-ab6a2822a2e5 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:38:14 functional-897534 crio[3547]: time="2025-10-18T08:38:14.897057913Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f559e1f0-d575-4779-b0b3-3ef94079e711 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:38:14 functional-897534 crio[3547]: time="2025-10-18T08:38:14.897520696Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=4b419744-d528-4612-af97-77f0a1dd8480 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:38:14 functional-897534 crio[3547]: time="2025-10-18T08:38:14.902133521Z" level=info msg="Creating container: default/mysql-5bb876957f-g46jc/mysql" id=5066e084-f47b-4ff2-871d-2d2f76c32a5c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 08:38:14 functional-897534 crio[3547]: time="2025-10-18T08:38:14.903551444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:38:14 functional-897534 crio[3547]: time="2025-10-18T08:38:14.90909012Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:38:14 functional-897534 crio[3547]: time="2025-10-18T08:38:14.909911921Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:38:14 functional-897534 crio[3547]: time="2025-10-18T08:38:14.942958051Z" level=info msg="Created container 3693a359652b842b89ab21d925c185234f557b5b62f3d64daf8e926c4f9a6c37: default/mysql-5bb876957f-g46jc/mysql" id=5066e084-f47b-4ff2-871d-2d2f76c32a5c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 08:38:14 functional-897534 crio[3547]: time="2025-10-18T08:38:14.943651107Z" level=info msg="Starting container: 3693a359652b842b89ab21d925c185234f557b5b62f3d64daf8e926c4f9a6c37" id=b9ef01e0-93a5-4ce2-8872-8a9408af601a name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 08:38:14 functional-897534 crio[3547]: time="2025-10-18T08:38:14.945593294Z" level=info msg="Started container" PID=7299 containerID=3693a359652b842b89ab21d925c185234f557b5b62f3d64daf8e926c4f9a6c37 description=default/mysql-5bb876957f-g46jc/mysql id=b9ef01e0-93a5-4ce2-8872-8a9408af601a name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e51ffb5ecea0fb1eefcdcfec681a4617ed93f6bfdb27926a5e6a436366a929c
	Oct 18 08:38:29 functional-897534 crio[3547]: time="2025-10-18T08:38:29.937176341Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a98fc505-04a1-45f9-9f0a-ff0c7b890a23 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:38:40 functional-897534 crio[3547]: time="2025-10-18T08:38:40.937495345Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=df1827bf-5954-4683-b398-eb7e7d12ce33 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:38:54 functional-897534 crio[3547]: time="2025-10-18T08:38:54.214957042Z" level=info msg="Stopping pod sandbox: e46becc48649675c329efbac4bc0dd2b0d616f1e5dff648056c1dd839d271dc8" id=c20bc8fe-225e-47dc-a015-21149e3cf505 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 08:38:54 functional-897534 crio[3547]: time="2025-10-18T08:38:54.215014678Z" level=info msg="Stopped pod sandbox (already stopped): e46becc48649675c329efbac4bc0dd2b0d616f1e5dff648056c1dd839d271dc8" id=c20bc8fe-225e-47dc-a015-21149e3cf505 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 08:38:54 functional-897534 crio[3547]: time="2025-10-18T08:38:54.215311024Z" level=info msg="Removing pod sandbox: e46becc48649675c329efbac4bc0dd2b0d616f1e5dff648056c1dd839d271dc8" id=61894630-555a-4886-9b41-148a220d3ef8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 08:38:54 functional-897534 crio[3547]: time="2025-10-18T08:38:54.218503236Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 08:38:54 functional-897534 crio[3547]: time="2025-10-18T08:38:54.218576543Z" level=info msg="Removed pod sandbox: e46becc48649675c329efbac4bc0dd2b0d616f1e5dff648056c1dd839d271dc8" id=61894630-555a-4886-9b41-148a220d3ef8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 08:39:20 functional-897534 crio[3547]: time="2025-10-18T08:39:20.937082292Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=96e79a85-7a8f-4652-8532-934f5e59bd67 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:39:21 functional-897534 crio[3547]: time="2025-10-18T08:39:21.937157482Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e8ba460f-b75a-44ce-92b9-84e7117f3181 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:40:43 functional-897534 crio[3547]: time="2025-10-18T08:40:43.937500587Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=33e0d172-b979-4fc9-a64f-58d9880bd845 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:40:48 functional-897534 crio[3547]: time="2025-10-18T08:40:48.936888326Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=10271f13-f92d-4164-9060-560c576ceed6 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:43:27 functional-897534 crio[3547]: time="2025-10-18T08:43:27.937277373Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0134f844-a80d-4c59-9acb-f150a4bbc948 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:43:35 functional-897534 crio[3547]: time="2025-10-18T08:43:35.937400686Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=aa70e8a0-5c5e-4192-97a5-53c25135ef48 name=/runtime.v1.ImageService/PullImage
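The CRI-O log above shows the same unqualified pull of kicbase/echo-server:latest being retried for minutes; whether such a short name is refused is governed by the node's registries configuration. A sketch for inspecting it, assuming the kicbase image keeps the stock path /etc/containers/registries.conf:

	minikube -p functional-897534 ssh -- sudo grep -n "short-name-mode" /etc/containers/registries.conf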
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	3693a359652b8       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   5e51ffb5ecea0       mysql-5bb876957f-g46jc                       default
	8a43b107085f6       docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115                  9 minutes ago       Running             myfrontend                  0                   7487f28cc3541       sp-pod                                       default
	d75c3ebe2e878       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  9 minutes ago       Running             nginx                       0                   d22e35989605b       nginx-svc                                    default
	de6669af8fd6b       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   8d57f2eadcad3       dashboard-metrics-scraper-77bf4d6c4c-tbswn   kubernetes-dashboard
	a1765dded385e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         10 minutes ago      Running             kubernetes-dashboard        0                   6f521b23f070b       kubernetes-dashboard-855c9754f9-8xm88        kubernetes-dashboard
	689d73421cdb9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              10 minutes ago      Exited              mount-munger                0                   975edbc7ad163       busybox-mount                                default
	40ff377338a79       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   e9d423567bb60       storage-provisioner                          kube-system
	5c61ae7c4c419       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   fb4e80623432a       kube-apiserver-functional-897534             kube-system
	ace7bfbc78ae5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   298b145f89d5b       etcd-functional-897534                       kube-system
	5eddaca08ba16       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   716d0e7cbd3c3       kube-controller-manager-functional-897534    kube-system
	75ceb70439011       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   8effce9d5c417       coredns-66bc5c9577-jgqt7                     kube-system
	ecb91ca8b66af       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   e9d423567bb60       storage-provisioner                          kube-system
	9fd6a2134e403       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   a0bbf48d4e95a       kube-proxy-9ww8b                             kube-system
	312c48bd2093d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   8f99597a23e72       kindnet-9n8vd                                kube-system
	0c1c4d4564e4d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Running             kube-scheduler              1                   1657563b1b2be       kube-scheduler-functional-897534             kube-system
	bdb6b3365cdb3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   8effce9d5c417       coredns-66bc5c9577-jgqt7                     kube-system
	0ac92f8a56d9e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   a0bbf48d4e95a       kube-proxy-9ww8b                             kube-system
	7c150a4d2a6ab       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   8f99597a23e72       kindnet-9n8vd                                kube-system
	f42ef6dbb0b17       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 12 minutes ago      Exited              kube-scheduler              0                   1657563b1b2be       kube-scheduler-functional-897534             kube-system
	5ce68f4a45c8c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 12 minutes ago      Exited              kube-controller-manager     0                   716d0e7cbd3c3       kube-controller-manager-functional-897534    kube-system
	ca9b31ad068cd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   298b145f89d5b       etcd-functional-897534                       kube-system
	
	
	==> coredns [75ceb70439011a6e37bd3299ee99ba32a4f1ebd1faa6bb3f9e73595ab393f2d6] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53214 - 53991 "HINFO IN 8689676487471099202.9103090746332585252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.469843687s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [bdb6b3365cdb30937114a3dc05fa0ca8acc3de2bc0f73dd5ea61a535aef5805c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43798 - 34409 "HINFO IN 7997764260518047471.7628803681440897473. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.471430161s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
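The connection-refused and TLS-timeout errors in the first coredns log line up with the control-plane restart visible in the container status table above; once the API server came back, coredns synced and served normally. Its current state can be re-checked with:

	kubectl --context functional-897534 -n kube-system get pods -l k8s-app=kube-dns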
	
	
	==> describe nodes <==
	Name:               functional-897534
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-897534
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=functional-897534
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T08_35_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 08:35:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-897534
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 08:47:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 08:46:35 +0000   Sat, 18 Oct 2025 08:35:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 08:46:35 +0000   Sat, 18 Oct 2025 08:35:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 08:46:35 +0000   Sat, 18 Oct 2025 08:35:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 08:46:35 +0000   Sat, 18 Oct 2025 08:36:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-897534
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                5046591c-61bc-47d8-abaa-c2e4a26fe724
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-wvlqc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-xw6rz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-g46jc                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m50s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-jgqt7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-897534                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-9n8vd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-897534              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-897534     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-9ww8b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-897534              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-tbswn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8xm88         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-897534 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-897534 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-897534 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node functional-897534 event: Registered Node functional-897534 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-897534 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-897534 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-897534 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-897534 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-897534 event: Registered Node functional-897534 in Controller
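
	Note: the duplicated Starting / NodeHasSufficient* / RegisteredNode events above are consistent with the cluster restart the functional suite performs mid-run (the paired etcd, kube-proxy and kube-scheduler logs below show the same two generations). If the profile is still up, the same view can be re-queried with, for example:

	  out/minikube-linux-amd64 -p functional-897534 kubectl -- describe node functional-897534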
	
	
	==> dmesg <==
	[  +0.101295] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028366] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.196963] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.012248] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.023849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.024040] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +2.047589] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +4.031586] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +8.255150] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[ +16.382250] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[Oct18 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
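
	Note: the repeated "martian source ... from 127.0.0.1" lines are most likely benign noise rather than a networking fault: kube-proxy sets route_localnet=1 (visible in its log below) so that NodePorts accept loopback traffic, and the kernel then logs such loopback-sourced packets on eth0 as martians when log_martians is enabled. The setting can be checked on the node with:

	  sysctl net.ipv4.conf.all.route_localnet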
	
	
	==> etcd [ace7bfbc78ae57a3f61104396ba03d7db0cf979ebeab9e0f340e794873d96997] <==
	{"level":"warn","ts":"2025-10-18T08:37:14.466621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.475948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.483112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.489275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.496252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.504539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.510401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.516514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.523957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.530634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.537162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.543957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.550106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.556242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.562451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.568755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.574907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.591815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.598184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.605257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:37:14.655012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33386","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T08:38:01.117736Z","caller":"traceutil/trace.go:172","msg":"trace[558564471] transaction","detail":"{read_only:false; response_revision:763; number_of_response:1; }","duration":"175.069557ms","start":"2025-10-18T08:38:00.942644Z","end":"2025-10-18T08:38:01.117714Z","steps":["trace[558564471] 'process raft request'  (duration: 173.44987ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:47:14.170978Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1115}
	{"level":"info","ts":"2025-10-18T08:47:14.190533Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1115,"took":"19.195283ms","hash":967861335,"current-db-size-bytes":3383296,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1548288,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-18T08:47:14.190582Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":967861335,"revision":1115,"compact-revision":-1}
	
	
	==> etcd [ca9b31ad068cd3b7b57ac148e94ee7e2500e6b0a95b7819df4d21431a39afbea] <==
	{"level":"warn","ts":"2025-10-18T08:35:56.245389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:35:56.251674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:35:56.258228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:35:56.269793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:35:56.276208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:35:56.282689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:35:56.329191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36874","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T08:36:51.625148Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T08:36:51.625245Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-897534","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-18T08:36:51.625337Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T08:36:51.625443Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T08:36:51.626906Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T08:36:51.626961Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-18T08:36:51.626972Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T08:36:51.627009Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T08:36:51.627031Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T08:36:51.627034Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T08:36:51.627042Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T08:36:51.627038Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-18T08:36:51.627044Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T08:36:51.627030Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T08:36:51.629014Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-18T08:36:51.629066Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T08:36:51.629085Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-18T08:36:51.629090Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-897534","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 08:47:59 up 30 min,  0 user,  load average: 0.26, 0.25, 0.34
	Linux functional-897534 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [312c48bd2093dbd3c94b968434b5767b5e0b4889ea1ad1f4884133dd47300b4e] <==
	I1018 08:45:51.585519       1 main.go:301] handling current node
	I1018 08:46:01.580013       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:46:01.580051       1 main.go:301] handling current node
	I1018 08:46:11.579624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:46:11.579661       1 main.go:301] handling current node
	I1018 08:46:21.587080       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:46:21.587112       1 main.go:301] handling current node
	I1018 08:46:31.579519       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:46:31.579558       1 main.go:301] handling current node
	I1018 08:46:41.580265       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:46:41.580299       1 main.go:301] handling current node
	I1018 08:46:51.583231       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:46:51.583262       1 main.go:301] handling current node
	I1018 08:47:01.584233       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:47:01.584274       1 main.go:301] handling current node
	I1018 08:47:11.579316       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:47:11.579399       1 main.go:301] handling current node
	I1018 08:47:21.588479       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:47:21.588513       1 main.go:301] handling current node
	I1018 08:47:31.579869       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:47:31.579903       1 main.go:301] handling current node
	I1018 08:47:41.585079       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:47:41.585115       1 main.go:301] handling current node
	I1018 08:47:51.587867       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:47:51.587906       1 main.go:301] handling current node
	
	
	==> kindnet [7c150a4d2a6ab3e405482ea7c68f9ec7ddad15d0644da55aba7a5d9f39224279] <==
	I1018 08:36:05.370836       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 08:36:05.371095       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1018 08:36:05.371244       1 main.go:148] setting mtu 1500 for CNI 
	I1018 08:36:05.371264       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 08:36:05.371293       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T08:36:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 08:36:05.573098       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 08:36:05.767014       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 08:36:05.767115       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 08:36:05.767299       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 08:36:06.067430       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 08:36:06.067465       1 metrics.go:72] Registering metrics
	I1018 08:36:06.067541       1 controller.go:711] "Syncing nftables rules"
	I1018 08:36:15.574648       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:36:15.574729       1 main.go:301] handling current node
	I1018 08:36:25.577605       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:36:25.577651       1 main.go:301] handling current node
	I1018 08:36:35.574890       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:36:35.574927       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5c61ae7c4c41978897e6d398afe9f496ba5970f0951a6609ec090275cd98d90c] <==
	I1018 08:37:15.134290       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 08:37:16.018471       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 08:37:16.049108       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1018 08:37:16.323494       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1018 08:37:16.324829       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 08:37:16.329235       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 08:37:16.793355       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 08:37:16.889112       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 08:37:16.938264       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 08:37:16.944067       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 08:37:18.495462       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 08:37:41.400876       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.190.248"}
	I1018 08:37:45.285694       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.238.10"}
	I1018 08:37:50.634270       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 08:37:50.761066       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.220.151"}
	I1018 08:37:50.774899       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.229.177"}
	I1018 08:37:57.522888       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.62.172"}
	I1018 08:37:57.528306       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.9.5"}
	E1018 08:38:06.808684       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41132: use of closed network connection
	I1018 08:38:09.086702       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.22.54"}
	E1018 08:38:15.404987       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41196: use of closed network connection
	E1018 08:38:22.215172       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50872: use of closed network connection
	E1018 08:38:22.821479       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50894: use of closed network connection
	E1018 08:38:25.008480       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50908: use of closed network connection
	I1018 08:47:15.038468       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5ce68f4a45c8c6bd3cb5248c4f7803765c1a0691b1c8486a71dee78d91fbc53b] <==
	I1018 08:36:03.713265       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 08:36:03.713385       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 08:36:03.713499       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 08:36:03.713515       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 08:36:03.714619       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 08:36:03.714666       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 08:36:03.714691       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 08:36:03.714722       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 08:36:03.714785       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 08:36:03.714798       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 08:36:03.714787       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 08:36:03.714799       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 08:36:03.716062       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 08:36:03.716105       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 08:36:03.716131       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 08:36:03.716163       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 08:36:03.716232       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 08:36:03.716324       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-897534"
	I1018 08:36:03.716416       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 08:36:03.719191       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 08:36:03.719202       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:36:03.719227       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 08:36:03.720379       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 08:36:03.735437       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 08:36:18.718561       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [5eddaca08ba1676b52cdfa7c25ca76e7518dfdc649dc65d6a6f73062464facb7] <==
	I1018 08:37:18.441164       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 08:37:18.441181       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 08:37:18.441191       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 08:37:18.441212       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 08:37:18.442320       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 08:37:18.443557       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 08:37:18.445781       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 08:37:18.445834       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 08:37:18.445850       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 08:37:18.445883       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 08:37:18.445896       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 08:37:18.445901       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 08:37:18.446364       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:37:18.448298       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 08:37:18.451506       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 08:37:18.452716       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 08:37:18.457012       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 08:37:18.458325       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 08:37:18.463587       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 08:37:50.697245       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 08:37:50.701384       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 08:37:50.702853       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 08:37:50.706679       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 08:37:50.706685       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 08:37:50.711025       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [0ac92f8a56d9e1e0bbe83754fbaeb6f5d3372e9339427921a9bde74c12c470fd] <==
	I1018 08:36:05.236485       1 server_linux.go:53] "Using iptables proxy"
	I1018 08:36:05.301403       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 08:36:05.401692       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 08:36:05.401765       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 08:36:05.401837       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 08:36:05.420010       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 08:36:05.420071       1 server_linux.go:132] "Using iptables Proxier"
	I1018 08:36:05.425253       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 08:36:05.425678       1 server.go:527] "Version info" version="v1.34.1"
	I1018 08:36:05.425710       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 08:36:05.426956       1 config.go:200] "Starting service config controller"
	I1018 08:36:05.426988       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 08:36:05.427088       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 08:36:05.427105       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 08:36:05.427135       1 config.go:309] "Starting node config controller"
	I1018 08:36:05.427162       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 08:36:05.427149       1 config.go:106] "Starting endpoint slice config controller"
	I1018 08:36:05.427195       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 08:36:05.427171       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 08:36:05.527383       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 08:36:05.527400       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 08:36:05.527483       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [9fd6a2134e403c94423ea85d37a3ffbba1b5ebbbf9a66d78d67c100c531bc9e1] <==
	E1018 08:36:41.265272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-897534&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 08:36:42.558391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-897534&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 08:36:44.497296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-897534&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 08:36:50.799131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-897534&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 08:37:10.784328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-897534&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 08:37:34.365330       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 08:37:34.365372       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 08:37:34.365450       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 08:37:34.384855       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 08:37:34.384912       1 server_linux.go:132] "Using iptables Proxier"
	I1018 08:37:34.390407       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 08:37:34.390699       1 server.go:527] "Version info" version="v1.34.1"
	I1018 08:37:34.390728       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 08:37:34.391940       1 config.go:200] "Starting service config controller"
	I1018 08:37:34.391964       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 08:37:34.392029       1 config.go:309] "Starting node config controller"
	I1018 08:37:34.392047       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 08:37:34.392115       1 config.go:106] "Starting endpoint slice config controller"
	I1018 08:37:34.392137       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 08:37:34.392189       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 08:37:34.392199       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 08:37:34.492075       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 08:37:34.492145       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 08:37:34.492250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 08:37:34.492274       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0c1c4d4564e4d4ecadd66a78954a522f1a7c31fa2b4d02a3196fedb78f4e36f3] <==
	E1018 08:37:02.499409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 08:37:02.537596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 08:37:02.731675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 08:37:02.794143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 08:37:02.849517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 08:37:06.328057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 08:37:07.222879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 08:37:07.776788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 08:37:08.687929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:37:10.106522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 08:37:12.250471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 08:37:12.338186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 08:37:12.699435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 08:37:12.880885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 08:37:13.121374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43788->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 08:37:13.121390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43842->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:37:13.121398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43826->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 08:37:13.121411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:52810->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 08:37:13.121532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43776->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 08:37:13.121780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43800->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 08:37:13.121839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43830->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 08:37:13.121854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43810->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 08:37:13.122005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43780->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:37:15.037971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1018 08:37:29.049961       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f42ef6dbb0b1700d8ab0ed7111ea2ec9f8d9bf92257d19fef851bc2fe2f47211] <==
	E1018 08:35:56.743214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:35:56.744016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 08:35:56.744165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 08:35:56.744397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 08:35:56.745724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:35:56.745851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 08:35:57.624855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 08:35:57.624859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 08:35:57.626791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 08:35:57.703470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 08:35:57.718411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 08:35:57.727757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 08:35:57.727758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 08:35:57.743359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 08:35:57.779549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:35:57.825701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 08:35:57.836046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 08:35:57.900116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 08:35:58.003617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1018 08:36:00.631869       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 08:36:40.893114       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 08:36:40.893158       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 08:36:40.893375       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 08:36:40.893400       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 08:36:40.893421       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 08:45:15 functional-897534 kubelet[4115]: E1018 08:45:15.937254    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xw6rz" podUID="74e87598-8125-4cb6-a207-f83852f22ae6"
	Oct 18 08:45:18 functional-897534 kubelet[4115]: E1018 08:45:18.936312    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wvlqc" podUID="bb780c92-02ad-4d57-b1f7-3ec2dbbc665c"
	Oct 18 08:45:29 functional-897534 kubelet[4115]: E1018 08:45:29.936827    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xw6rz" podUID="74e87598-8125-4cb6-a207-f83852f22ae6"
	Oct 18 08:45:33 functional-897534 kubelet[4115]: E1018 08:45:33.936857    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wvlqc" podUID="bb780c92-02ad-4d57-b1f7-3ec2dbbc665c"
	Oct 18 08:45:42 functional-897534 kubelet[4115]: E1018 08:45:42.936602    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xw6rz" podUID="74e87598-8125-4cb6-a207-f83852f22ae6"
	Oct 18 08:45:46 functional-897534 kubelet[4115]: E1018 08:45:46.937115    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wvlqc" podUID="bb780c92-02ad-4d57-b1f7-3ec2dbbc665c"
	Oct 18 08:45:55 functional-897534 kubelet[4115]: E1018 08:45:55.936951    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xw6rz" podUID="74e87598-8125-4cb6-a207-f83852f22ae6"
	Oct 18 08:45:58 functional-897534 kubelet[4115]: E1018 08:45:58.937091    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wvlqc" podUID="bb780c92-02ad-4d57-b1f7-3ec2dbbc665c"
	Oct 18 08:46:08 functional-897534 kubelet[4115]: E1018 08:46:08.936905    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xw6rz" podUID="74e87598-8125-4cb6-a207-f83852f22ae6"
	Oct 18 08:46:09 functional-897534 kubelet[4115]: E1018 08:46:09.936674    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wvlqc" podUID="bb780c92-02ad-4d57-b1f7-3ec2dbbc665c"
	Oct 18 08:46:20 functional-897534 kubelet[4115]: E1018 08:46:20.936223    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wvlqc" podUID="bb780c92-02ad-4d57-b1f7-3ec2dbbc665c"
	Oct 18 08:46:23 functional-897534 kubelet[4115]: E1018 08:46:23.937180    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xw6rz" podUID="74e87598-8125-4cb6-a207-f83852f22ae6"
	Oct 18 08:46:33 functional-897534 kubelet[4115]: E1018 08:46:33.936880    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wvlqc" podUID="bb780c92-02ad-4d57-b1f7-3ec2dbbc665c"
	Oct 18 08:46:38 functional-897534 kubelet[4115]: E1018 08:46:38.937032    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xw6rz" podUID="74e87598-8125-4cb6-a207-f83852f22ae6"
	Oct 18 08:46:45 functional-897534 kubelet[4115]: E1018 08:46:45.937014    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wvlqc" podUID="bb780c92-02ad-4d57-b1f7-3ec2dbbc665c"
	Oct 18 08:46:53 functional-897534 kubelet[4115]: E1018 08:46:53.937102    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xw6rz" podUID="74e87598-8125-4cb6-a207-f83852f22ae6"
	Oct 18 08:47:00 functional-897534 kubelet[4115]: E1018 08:47:00.936531    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wvlqc" podUID="bb780c92-02ad-4d57-b1f7-3ec2dbbc665c"
	Oct 18 08:47:05 functional-897534 kubelet[4115]: E1018 08:47:05.936710    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xw6rz" podUID="74e87598-8125-4cb6-a207-f83852f22ae6"
	Oct 18 08:47:14 functional-897534 kubelet[4115]: E1018 08:47:14.936875    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wvlqc" podUID="bb780c92-02ad-4d57-b1f7-3ec2dbbc665c"
	Oct 18 08:47:20 functional-897534 kubelet[4115]: E1018 08:47:20.936246    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xw6rz" podUID="74e87598-8125-4cb6-a207-f83852f22ae6"
	Oct 18 08:47:29 functional-897534 kubelet[4115]: E1018 08:47:29.936883    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wvlqc" podUID="bb780c92-02ad-4d57-b1f7-3ec2dbbc665c"
	Oct 18 08:47:33 functional-897534 kubelet[4115]: E1018 08:47:33.937664    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xw6rz" podUID="74e87598-8125-4cb6-a207-f83852f22ae6"
	Oct 18 08:47:40 functional-897534 kubelet[4115]: E1018 08:47:40.936663    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wvlqc" podUID="bb780c92-02ad-4d57-b1f7-3ec2dbbc665c"
	Oct 18 08:47:45 functional-897534 kubelet[4115]: E1018 08:47:45.938791    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xw6rz" podUID="74e87598-8125-4cb6-a207-f83852f22ae6"
	Oct 18 08:47:53 functional-897534 kubelet[4115]: E1018 08:47:53.937881    4115 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wvlqc" podUID="bb780c92-02ad-4d57-b1f7-3ec2dbbc665c"
	
	
	==> kubernetes-dashboard [a1765dded385efdb57c67e547820a79689110845f887a01f24e4c6529429a2a9] <==
	2025/10/18 08:37:54 Using namespace: kubernetes-dashboard
	2025/10/18 08:37:54 Using in-cluster config to connect to apiserver
	2025/10/18 08:37:54 Using secret token for csrf signing
	2025/10/18 08:37:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 08:37:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 08:37:54 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 08:37:54 Generating JWE encryption key
	2025/10/18 08:37:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 08:37:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 08:37:54 Initializing JWE encryption key from synchronized object
	2025/10/18 08:37:54 Creating in-cluster Sidecar client
	2025/10/18 08:37:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 08:37:54 Serving insecurely on HTTP port: 9090
	2025/10/18 08:38:24 Successful request to sidecar
	2025/10/18 08:37:54 Starting overwatch
	
	
	==> storage-provisioner [40ff377338a7964dcc82246078c37ecf8faf6b286837c61f33baf9638a5d6ac7] <==
	W1018 08:47:34.178869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:36.181896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:36.187088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:38.189917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:38.193782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:40.196794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:40.200799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:42.204041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:42.208131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:44.210666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:44.215716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:46.218511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:46.223775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:48.227168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:48.231846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:50.234838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:50.240507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:52.243810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:52.247769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:54.251262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:54.255797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:56.258557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:56.262947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:58.266539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:47:58.271031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ecb91ca8b66af999bd5c437dc6eb4aebc1f53b90273c4f57d0d625c77a47a949] <==
	I1018 08:36:41.162059       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 08:36:41.163546       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-897534 -n functional-897534
helpers_test.go:269: (dbg) Run:  kubectl --context functional-897534 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-wvlqc hello-node-connect-7d85dfc575-xw6rz
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-897534 describe pod busybox-mount hello-node-75c85bcc94-wvlqc hello-node-connect-7d85dfc575-xw6rz
helpers_test.go:290: (dbg) kubectl --context functional-897534 describe pod busybox-mount hello-node-75c85bcc94-wvlqc hello-node-connect-7d85dfc575-xw6rz:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-897534/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 08:37:49 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://689d73421cdb9d9d21d230b960e5e7bb1dfdfa79d66eb60d8567c978833c3124
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 18 Oct 2025 08:37:50 +0000
	      Finished:     Sat, 18 Oct 2025 08:37:50 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9s6px (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-9s6px:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-897534
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 764ms (765ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-wvlqc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-897534/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 08:37:45 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gn84s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gn84s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wvlqc to functional-897534
	  Normal   Pulling    7m17s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m17s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m17s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    7s (x44 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     7s (x44 over 10m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-xw6rz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-897534/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 08:37:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6q9zr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6q9zr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xw6rz to functional-897534
	  Normal   Pulling    7m12s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m54s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    1s (x41 over 10m)     kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.88s)
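
Note on the root cause above: every echo-server pull in the kubelet log fails with "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list". Under CRI-O this comes from containers-registries.conf: with short-name-mode = "enforcing", an unqualified reference that could resolve against more than one unqualified-search registry is rejected as ambiguous instead of being pulled. A minimal sketch of the relevant node-side configuration (file contents assumed for illustration, not captured from this run):

	# /etc/containers/registries.conf (inside the minikube node) -- assumed example
	# With enforcing mode, the short name "kicbase/echo-server" is ambiguous
	# against every registry listed below, so the pull is refused outright.
	short-name-mode = "enforcing"
	unqualified-search-registries = ["docker.io", "quay.io"]

Fully qualifying the reference (e.g. docker.io/kicbase/echo-server with an explicit tag) sidesteps short-name resolution entirely.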

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-897534 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-897534 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-wvlqc" [bb780c92-02ad-4d57-b1f7-3ec2dbbc665c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-897534 -n functional-897534
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-18 08:47:45.610897372 +0000 UTC m=+1112.938529156
functional_test.go:1460: (dbg) Run:  kubectl --context functional-897534 describe po hello-node-75c85bcc94-wvlqc -n default
functional_test.go:1460: (dbg) kubectl --context functional-897534 describe po hello-node-75c85bcc94-wvlqc -n default:
Name:             hello-node-75c85bcc94-wvlqc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-897534/192.168.49.2
Start Time:       Sat, 18 Oct 2025 08:37:45 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gn84s (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-gn84s:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wvlqc to functional-897534
Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m2s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m2s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m51s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m51s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-897534 logs hello-node-75c85bcc94-wvlqc -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-897534 logs hello-node-75c85bcc94-wvlqc -n default: exit status 1 (70.182139ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-wvlqc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-897534 logs hello-node-75c85bcc94-wvlqc -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.63s)
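
The deployment under test is created with the unqualified name kicbase/echo-server (functional_test.go:1451 above), which is exactly what the enforcing short-name mode rejects. A hedged workaround sketch using a fully qualified reference (the 1.0 tag is assumed for illustration, not taken from this run):

	kubectl --context functional-897534 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-897534 expose deployment hello-node \
	  --type=NodePort --port=8080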

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image load --daemon kicbase/echo-server:functional-897534 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-897534" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.97s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image load --daemon kicbase/echo-server:functional-897534 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-897534" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-897534
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image load --daemon kicbase/echo-server:functional-897534 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-897534" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.39s)
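
The three daemon-load failures (ImageLoadDaemon, ImageReloadDaemon, ImageTagAndLoadDaemon) share one shape: `image load --daemon` exits zero, but the tag never appears in `image ls`. A quick way to see what the node's CRI-O runtime actually holds, assuming crictl is present in the node image as usual:

	out/minikube-linux-amd64 -p functional-897534 image ls
	out/minikube-linux-amd64 -p functional-897534 ssh -- sudo crictl images | grep echo-server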

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image save kicbase/echo-server:functional-897534 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1018 08:37:50.815285   45629 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:37:50.815611   45629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:37:50.815622   45629 out.go:374] Setting ErrFile to fd 2...
	I1018 08:37:50.815629   45629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:37:50.815854   45629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:37:50.816689   45629 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:37:50.816867   45629 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:37:50.817434   45629 cli_runner.go:164] Run: docker container inspect functional-897534 --format={{.State.Status}}
	I1018 08:37:50.837867   45629 ssh_runner.go:195] Run: systemctl --version
	I1018 08:37:50.837927   45629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-897534
	I1018 08:37:50.858778   45629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/functional-897534/id_rsa Username:docker}
	I1018 08:37:50.957152   45629 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1018 08:37:50.957220   45629 cache_images.go:254] Failed to load cached images for "functional-897534": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1018 08:37:50.957246   45629 cache_images.go:266] failed pushing to: functional-897534

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
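
This failure is downstream of ImageSaveToFile above: the save never wrote echo-server-save.tar, so the load has nothing to stat. A guard sketch that makes the dependency explicit (path taken from the log; the test itself does not perform this check):

	# skip the load when the earlier save produced no tarball
	tar=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	if [ -f "$tar" ]; then
	  out/minikube-linux-amd64 -p functional-897534 image load "$tar"
	else
	  echo "save step never produced $tar" >&2
	fi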

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-897534
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image save --daemon kicbase/echo-server:functional-897534 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-897534
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-897534: exit status 1 (17.544674ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-897534

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-897534

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)
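
`image save --daemon` can only export a tag that the cluster runtime holds, and the earlier loads never landed, so the save is effectively a no-op and `docker image inspect` finds nothing under the localhost/ prefix. A host-side check (sketch; standard docker CLI reference filters):

	docker image ls --filter reference='localhost/kicbase/echo-server:*'
	docker image ls --filter reference='kicbase/echo-server:*'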

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897534 service --namespace=default --https --url hello-node: exit status 115 (524.488132ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31476
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-897534 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897534 service hello-node --url --format={{.IP}}: exit status 115 (528.917178ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-897534 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897534 service hello-node --url: exit status 115 (533.199208ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31476
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-897534 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31476
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)
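
All three ServiceCmd exits (HTTPS, Format, URL) are the same cascade from DeployApp: the node IP and NodePort (31476) resolve and are printed, but minikube still exits with SVC_UNREACHABLE because the service has no running backend behind it. A sketch to confirm where the chain breaks, using only names from the logs:

	kubectl --context functional-897534 get endpoints hello-node   # no ready addresses expected
	kubectl --context functional-897534 get pods -l app=hello-node # ImagePullBackOff expected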

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.62s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-162687 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-162687 --output=json --user=testUser: exit status 80 (1.622582908s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"219b1478-e833-413a-85a6-4e2a85fc37b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-162687 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"163847dc-da26-4fa9-84d7-6295ffc2c494","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T08:56:44Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"fbae8b04-097a-4006-8e8b-ed1f794432a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-162687 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.62s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.96s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-162687 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-162687 --output=json --user=testUser: exit status 80 (1.963459348s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2ab4ba37-03c2-4a3c-9eab-2ef926c17e55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-162687 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"136c4abb-c30b-4f57-b4ce-d84ef62fa675","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T08:56:46Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"53f9acaf-8b4f-47a8-8869-0e5226412420","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-162687 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.96s)
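
pause and unpause fail identically: minikube shells into the node and runs `sudo runc list -f json`, and runc's default state directory /run/runc does not exist on this CRI-O node, so the container listing fails before anything is paused or unpaused. A hedged probe of where the container state actually lives (directory and config names assumed, not confirmed by this run):

	out/minikube-linux-amd64 -p json-output-162687 ssh -- sudo ls -d /run/runc /run/crio
	# if CRI-O is configured with a non-default runtime_root, it shows up here:
	out/minikube-linux-amd64 -p json-output-162687 ssh -- sudo crio config | grep -A2 runtime_root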

                                                
                                    
x
+
TestPause/serial/Pause (5.32s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-182020 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-182020 --alsologtostderr -v=5: exit status 80 (1.601048892s)

                                                
                                                
-- stdout --
	* Pausing node pause-182020 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:11:53.690539  229172 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:11:53.690660  229172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:11:53.690670  229172 out.go:374] Setting ErrFile to fd 2...
	I1018 09:11:53.690675  229172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:11:53.691114  229172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:11:53.691406  229172 out.go:368] Setting JSON to false
	I1018 09:11:53.691445  229172 mustload.go:65] Loading cluster: pause-182020
	I1018 09:11:53.691811  229172 config.go:182] Loaded profile config "pause-182020": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:11:53.692187  229172 cli_runner.go:164] Run: docker container inspect pause-182020 --format={{.State.Status}}
	I1018 09:11:53.711698  229172 host.go:66] Checking if "pause-182020" exists ...
	I1018 09:11:53.712071  229172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:11:53.776740  229172 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-18 09:11:53.764680381 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:11:53.777392  229172 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-182020 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
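The "%!s(bool=false)" tokens in the flag dump above are Go fmt error annotations (a non-string value rendered with the %s verb), and the trailing ="(MISSING)" appears to be klog's placeholder for a key logged without a value; neither is corruption in this report. A minimal illustration of the fmt side:

	package main

	import "fmt"

	func main() {
		// %s applied to a non-string operand prints %!s(TYPE=VALUE):
		fmt.Printf("%s\n", false) // %!s(bool=false)
		// fmt's own marker for a verb with no operand (klog prints "(MISSING)" instead):
		fmt.Printf("%s=%s\n", "keys") // keys=%!s(MISSING)
	}
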
	I1018 09:11:53.779311  229172 out.go:179] * Pausing node pause-182020 ... 
	I1018 09:11:53.780553  229172 host.go:66] Checking if "pause-182020" exists ...
	I1018 09:11:53.780836  229172 ssh_runner.go:195] Run: systemctl --version
	I1018 09:11:53.780872  229172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:53.802061  229172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/pause-182020/id_rsa Username:docker}
	I1018 09:11:53.901576  229172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:11:53.915111  229172 pause.go:52] kubelet running: true
	I1018 09:11:53.915178  229172 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:11:54.058388  229172 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:11:54.058495  229172 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:11:54.139692  229172 cri.go:89] found id: "f5ca822f709a1b9798c695190a4248e26ef175c5318934a88c07b7b1904bff04"
	I1018 09:11:54.139715  229172 cri.go:89] found id: "d7a65a8f4fe938ccc6475ccee4fbc8b3e900a2ff7eddc8b7274ea535e3bfcea8"
	I1018 09:11:54.139720  229172 cri.go:89] found id: "eaf98db334b9006e58ac3d2f70ad88ace993203f90b34acea96e2f6ddfbeaec8"
	I1018 09:11:54.139725  229172 cri.go:89] found id: "8807016bdc6e7d79c7d0284d142aa731ef1ac4dc315dba67d96ef49d194d63d2"
	I1018 09:11:54.139730  229172 cri.go:89] found id: "04e5f45aeb8859409b77f6e8ae85fde70680a8b11a8d7d93b61866eee5c7d370"
	I1018 09:11:54.139734  229172 cri.go:89] found id: "12a42de0cfa9baf1a98897014767f47b63a1c6ae213ffaf923be29be984231c3"
	I1018 09:11:54.139739  229172 cri.go:89] found id: "d92c7b180dca0476851d7f6127a563b17c31e334cb49a9c36a31c2355cb3eeae"
	I1018 09:11:54.139743  229172 cri.go:89] found id: ""
	I1018 09:11:54.139784  229172 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:11:54.153164  229172 retry.go:31] will retry after 201.583044ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:11:54Z" level=error msg="open /run/runc: no such file or directory"
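The retry.go lines above show minikube's generic retry helper re-running the failed "sudo runc list -f json" after a randomized delay. A minimal sketch of that pattern, with illustrative names and constants (not minikube's actual retry API):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryCmd re-runs a command with a jittered backoff until it
	// succeeds or the attempts are exhausted.
	func retryCmd(name string, args ...string) error {
		var err error
		for attempt := 0; attempt < 5; attempt++ {
			if err = exec.Command(name, args...).Run(); err == nil {
				return nil
			}
			// randomized wait, similar in spirit to the 201ms/396ms delays logged above
			delay := time.Duration(100+rand.Intn(300)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		if err := retryCmd("sudo", "runc", "list", "-f", "json"); err != nil {
			fmt.Println("giving up:", err)
		}
	}
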
	I1018 09:11:54.355872  229172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:11:54.370731  229172 pause.go:52] kubelet running: false
	I1018 09:11:54.370780  229172 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:11:54.524005  229172 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:11:54.524114  229172 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:11:54.605548  229172 cri.go:89] found id: "f5ca822f709a1b9798c695190a4248e26ef175c5318934a88c07b7b1904bff04"
	I1018 09:11:54.605575  229172 cri.go:89] found id: "d7a65a8f4fe938ccc6475ccee4fbc8b3e900a2ff7eddc8b7274ea535e3bfcea8"
	I1018 09:11:54.605584  229172 cri.go:89] found id: "eaf98db334b9006e58ac3d2f70ad88ace993203f90b34acea96e2f6ddfbeaec8"
	I1018 09:11:54.605589  229172 cri.go:89] found id: "8807016bdc6e7d79c7d0284d142aa731ef1ac4dc315dba67d96ef49d194d63d2"
	I1018 09:11:54.605594  229172 cri.go:89] found id: "04e5f45aeb8859409b77f6e8ae85fde70680a8b11a8d7d93b61866eee5c7d370"
	I1018 09:11:54.605598  229172 cri.go:89] found id: "12a42de0cfa9baf1a98897014767f47b63a1c6ae213ffaf923be29be984231c3"
	I1018 09:11:54.605602  229172 cri.go:89] found id: "d92c7b180dca0476851d7f6127a563b17c31e334cb49a9c36a31c2355cb3eeae"
	I1018 09:11:54.605604  229172 cri.go:89] found id: ""
	I1018 09:11:54.605653  229172 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:11:54.618594  229172 retry.go:31] will retry after 396.926766ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:11:54Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:11:55.016239  229172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:11:55.029860  229172 pause.go:52] kubelet running: false
	I1018 09:11:55.029923  229172 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:11:55.140385  229172 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:11:55.140475  229172 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:11:55.219001  229172 cri.go:89] found id: "f5ca822f709a1b9798c695190a4248e26ef175c5318934a88c07b7b1904bff04"
	I1018 09:11:55.219022  229172 cri.go:89] found id: "d7a65a8f4fe938ccc6475ccee4fbc8b3e900a2ff7eddc8b7274ea535e3bfcea8"
	I1018 09:11:55.219036  229172 cri.go:89] found id: "eaf98db334b9006e58ac3d2f70ad88ace993203f90b34acea96e2f6ddfbeaec8"
	I1018 09:11:55.219042  229172 cri.go:89] found id: "8807016bdc6e7d79c7d0284d142aa731ef1ac4dc315dba67d96ef49d194d63d2"
	I1018 09:11:55.219047  229172 cri.go:89] found id: "04e5f45aeb8859409b77f6e8ae85fde70680a8b11a8d7d93b61866eee5c7d370"
	I1018 09:11:55.219052  229172 cri.go:89] found id: "12a42de0cfa9baf1a98897014767f47b63a1c6ae213ffaf923be29be984231c3"
	I1018 09:11:55.219056  229172 cri.go:89] found id: "d92c7b180dca0476851d7f6127a563b17c31e334cb49a9c36a31c2355cb3eeae"
	I1018 09:11:55.219060  229172 cri.go:89] found id: ""
	I1018 09:11:55.219110  229172 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:11:55.234965  229172 out.go:203] 
	W1018 09:11:55.236945  229172 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:11:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:11:55.236975  229172 out.go:285] * 
	W1018 09:11:55.241621  229172 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:11:55.243129  229172 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-182020 --alsologtostderr -v=5" : exit status 80
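The pause itself failed because "sudo runc list -f json" could not open runc's default state directory /run/runc inside the node (the "open /run/runc: no such file or directory" lines in the stderr above). A small Go sketch for reproducing that check from the host, assuming the pause-182020 container from this run is still up:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Container name taken from the docker inspect output below;
		// adjust for a different profile.
		out, err := exec.Command("docker", "exec", "pause-182020",
			"ls", "-ld", "/run/runc").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// "No such file or directory" here reproduces the condition
			// behind the GUEST_PAUSE failure above.
			fmt.Println("check failed:", err)
		}
	}
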
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-182020
helpers_test.go:243: (dbg) docker inspect pause-182020:

-- stdout --
	[
	    {
	        "Id": "baf0e51c83d2f433af5a5a4743421fc7956e698e7fab5fe7c0aec16c33c5044b",
	        "Created": "2025-10-18T09:10:42.215698218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213968,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:10:42.264857749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/baf0e51c83d2f433af5a5a4743421fc7956e698e7fab5fe7c0aec16c33c5044b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/baf0e51c83d2f433af5a5a4743421fc7956e698e7fab5fe7c0aec16c33c5044b/hostname",
	        "HostsPath": "/var/lib/docker/containers/baf0e51c83d2f433af5a5a4743421fc7956e698e7fab5fe7c0aec16c33c5044b/hosts",
	        "LogPath": "/var/lib/docker/containers/baf0e51c83d2f433af5a5a4743421fc7956e698e7fab5fe7c0aec16c33c5044b/baf0e51c83d2f433af5a5a4743421fc7956e698e7fab5fe7c0aec16c33c5044b-json.log",
	        "Name": "/pause-182020",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-182020:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-182020",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "baf0e51c83d2f433af5a5a4743421fc7956e698e7fab5fe7c0aec16c33c5044b",
	                "LowerDir": "/var/lib/docker/overlay2/bf5a2bf4a307ee5fc5456cb16516a6e5f3e5897435b46d49813412d2bda54460-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf5a2bf4a307ee5fc5456cb16516a6e5f3e5897435b46d49813412d2bda54460/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf5a2bf4a307ee5fc5456cb16516a6e5f3e5897435b46d49813412d2bda54460/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf5a2bf4a307ee5fc5456cb16516a6e5f3e5897435b46d49813412d2bda54460/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-182020",
	                "Source": "/var/lib/docker/volumes/pause-182020/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-182020",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-182020",
	                "name.minikube.sigs.k8s.io": "pause-182020",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eadaf7d9dd66fe2ddd99d19b0879ed1750bebcc8f22f317cf111488bb5623698",
	            "SandboxKey": "/var/run/docker/netns/eadaf7d9dd66",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33033"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33034"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33037"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33035"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33036"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-182020": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:3a:53:cd:4e:b0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "34e553f31b74848eb25394f53660a6e6a3a7608b4a92afc6c1411bb7365b42f1",
	                    "EndpointID": "8bcc8d78c83c4629118298bf34719c2b26ab4ce9c959efde052fc30e25e69c9a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-182020",
	                        "baf0e51c83d2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
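The NetworkSettings.Ports map in the inspect output above is where the test's SSH port comes from (33033 for 22/tcp, see the sshutil.go:53 line earlier in the log); the cli_runner invocation queried it with a Go template. A minimal sketch doing the same lookup by decoding the inspect JSON, assuming the docker CLI is on PATH:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "pause-182020").Output()
		if err != nil {
			panic(err)
		}
		var cs []container
		if err := json.Unmarshal(out, &cs); err != nil {
			panic(err)
		}
		// First host binding for the SSH port, matching the template
		// {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
		fmt.Println("ssh port:", cs[0].NetworkSettings.Ports["22/tcp"][0].HostPort)
	}
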
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-182020 -n pause-182020
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-182020 -n pause-182020: exit status 2 (342.551244ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-182020 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p force-systemd-flag-619251 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-619251 │ jenkins │ v1.37.0 │ 18 Oct 25 09:09 UTC │ 18 Oct 25 09:10 UTC │
	│ start   │ -p running-upgrade-152288 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ running-upgrade-152288    │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ delete  │ -p missing-upgrade-196626                                                                                                                                                                                                 │ missing-upgrade-196626    │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ start   │ -p force-systemd-env-980759 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-980759  │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ ssh     │ force-systemd-flag-619251 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-619251 │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ delete  │ -p force-systemd-flag-619251                                                                                                                                                                                              │ force-systemd-flag-619251 │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ start   │ -p cert-expiration-558693 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-558693    │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ delete  │ -p running-upgrade-152288                                                                                                                                                                                                 │ running-upgrade-152288    │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ delete  │ -p force-systemd-env-980759                                                                                                                                                                                               │ force-systemd-env-980759  │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ start   │ -p cert-options-043492 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-043492       │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ start   │ -p pause-182020 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-182020              │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:11 UTC │
	│ ssh     │ cert-options-043492 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-043492       │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ ssh     │ -p cert-options-043492 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-043492       │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ delete  │ -p cert-options-043492                                                                                                                                                                                                    │ cert-options-043492       │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:11 UTC │
	│ start   │ -p NoKubernetes-548249 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                             │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │                     │
	│ start   │ -p NoKubernetes-548249 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                     │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │ 18 Oct 25 09:11 UTC │
	│ start   │ -p NoKubernetes-548249 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                     │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │ 18 Oct 25 09:11 UTC │
	│ delete  │ -p NoKubernetes-548249                                                                                                                                                                                                    │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │ 18 Oct 25 09:11 UTC │
	│ start   │ -p NoKubernetes-548249 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                     │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │ 18 Oct 25 09:11 UTC │
	│ ssh     │ -p NoKubernetes-548249 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │                     │
	│ stop    │ -p NoKubernetes-548249                                                                                                                                                                                                    │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │ 18 Oct 25 09:11 UTC │
	│ start   │ -p pause-182020 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-182020              │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │ 18 Oct 25 09:11 UTC │
	│ start   │ -p NoKubernetes-548249 --driver=docker  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │ 18 Oct 25 09:11 UTC │
	│ pause   │ -p pause-182020 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-182020              │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │                     │
	│ ssh     │ -p NoKubernetes-548249 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:11:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:11:48.756862  227604 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:11:48.757168  227604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:11:48.757173  227604 out.go:374] Setting ErrFile to fd 2...
	I1018 09:11:48.757178  227604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:11:48.757487  227604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:11:48.757947  227604 out.go:368] Setting JSON to false
	I1018 09:11:48.759174  227604 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3257,"bootTime":1760775452,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:11:48.759258  227604 start.go:141] virtualization: kvm guest
	I1018 09:11:48.761298  227604 out.go:179] * [NoKubernetes-548249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:11:48.763160  227604 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:11:48.763161  227604 notify.go:220] Checking for updates...
	I1018 09:11:48.764602  227604 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:11:48.766399  227604 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:11:48.767778  227604 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:11:48.769106  227604 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:11:48.770203  227604 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:11:48.772079  227604 config.go:182] Loaded profile config "NoKubernetes-548249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1018 09:11:48.772788  227604 start.go:1804] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1018 09:11:48.772809  227604 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:11:48.801542  227604 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:11:48.801631  227604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:11:48.867868  227604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 09:11:48.856621704 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:11:48.867997  227604 docker.go:318] overlay module found
	I1018 09:11:48.869707  227604 out.go:179] * Using the docker driver based on existing profile
	I1018 09:11:48.870733  227604 start.go:305] selected driver: docker
	I1018 09:11:48.870740  227604 start.go:925] validating driver "docker" against &{Name:NoKubernetes-548249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-548249 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:11:48.870811  227604 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:11:48.870886  227604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:11:48.934015  227604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 09:11:48.924031616 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:11:48.934870  227604 cni.go:84] Creating CNI manager for ""
	I1018 09:11:48.934934  227604 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:11:48.934987  227604 start.go:349] cluster config:
	{Name:NoKubernetes-548249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-548249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:11:48.937423  227604 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-548249
	I1018 09:11:48.938499  227604 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:11:48.939774  227604 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:11:48.940850  227604 preload.go:183] Checking if preload exists for k8s version v0.0.0 and runtime crio
	I1018 09:11:48.940973  227604 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:11:48.962777  227604 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:11:48.962792  227604 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	W1018 09:11:48.974619  227604 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1018 09:11:49.062127  227604 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1018 09:11:49.062294  227604 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/NoKubernetes-548249/config.json ...
	I1018 09:11:49.062612  227604 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:11:49.062642  227604 start.go:360] acquireMachinesLock for NoKubernetes-548249: {Name:mk4f850bb94a692e49c3051cb30c34ec05dbe073 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:11:49.062723  227604 start.go:364] duration metric: took 50.017µs to acquireMachinesLock for "NoKubernetes-548249"
	I1018 09:11:49.062739  227604 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:11:49.062744  227604 fix.go:54] fixHost starting: 
	I1018 09:11:49.063055  227604 cli_runner.go:164] Run: docker container inspect NoKubernetes-548249 --format={{.State.Status}}
	I1018 09:11:49.082776  227604 fix.go:112] recreateIfNeeded on NoKubernetes-548249: state=Stopped err=<nil>
	W1018 09:11:49.082796  227604 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:11:47.753419  189686 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:11:47.753889  189686 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:11:47.753956  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:11:47.754017  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:11:47.797776  189686 cri.go:89] found id: "7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b"
	I1018 09:11:47.797799  189686 cri.go:89] found id: ""
	I1018 09:11:47.797809  189686 logs.go:282] 1 containers: [7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b]
	I1018 09:11:47.797867  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:47.802491  189686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:11:47.802573  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:11:47.836243  189686 cri.go:89] found id: ""
	I1018 09:11:47.836331  189686 logs.go:282] 0 containers: []
	W1018 09:11:47.836357  189686 logs.go:284] No container was found matching "etcd"
	I1018 09:11:47.836365  189686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:11:47.836435  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:11:47.873575  189686 cri.go:89] found id: ""
	I1018 09:11:47.873666  189686 logs.go:282] 0 containers: []
	W1018 09:11:47.873694  189686 logs.go:284] No container was found matching "coredns"
	I1018 09:11:47.873704  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:11:47.873812  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:11:47.905125  189686 cri.go:89] found id: "86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e"
	I1018 09:11:47.905153  189686 cri.go:89] found id: ""
	I1018 09:11:47.905163  189686 logs.go:282] 1 containers: [86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e]
	I1018 09:11:47.905220  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:47.909675  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:11:47.909764  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:11:47.942653  189686 cri.go:89] found id: ""
	I1018 09:11:47.942684  189686 logs.go:282] 0 containers: []
	W1018 09:11:47.942696  189686 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:11:47.942703  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:11:47.942766  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:11:47.974447  189686 cri.go:89] found id: "20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312"
	I1018 09:11:47.974467  189686 cri.go:89] found id: ""
	I1018 09:11:47.974474  189686 logs.go:282] 1 containers: [20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312]
	I1018 09:11:47.974526  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:47.978752  189686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:11:47.978829  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:11:48.008790  189686 cri.go:89] found id: ""
	I1018 09:11:48.008812  189686 logs.go:282] 0 containers: []
	W1018 09:11:48.008819  189686 logs.go:284] No container was found matching "kindnet"
	I1018 09:11:48.008825  189686 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:11:48.008868  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:11:48.041665  189686 cri.go:89] found id: ""
	I1018 09:11:48.041703  189686 logs.go:282] 0 containers: []
	W1018 09:11:48.041714  189686 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:11:48.041726  189686 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:11:48.041740  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:11:48.094901  189686 logs.go:123] Gathering logs for container status ...
	I1018 09:11:48.094942  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:11:48.130656  189686 logs.go:123] Gathering logs for kubelet ...
	I1018 09:11:48.130694  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:11:48.222261  189686 logs.go:123] Gathering logs for dmesg ...
	I1018 09:11:48.222293  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:11:48.238937  189686 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:11:48.238968  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:11:48.300040  189686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:11:48.300069  189686 logs.go:123] Gathering logs for kube-apiserver [7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b] ...
	I1018 09:11:48.300085  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b"
	I1018 09:11:48.337429  189686 logs.go:123] Gathering logs for kube-scheduler [86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e] ...
	I1018 09:11:48.337464  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e"
	I1018 09:11:48.391528  189686 logs.go:123] Gathering logs for kube-controller-manager [20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312] ...
	I1018 09:11:48.391560  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312"
	I1018 09:11:47.947904  227009 out.go:252] * Updating the running docker "pause-182020" container ...
	I1018 09:11:47.947938  227009 machine.go:93] provisionDockerMachine start ...
	I1018 09:11:47.948012  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:47.970012  227009 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:47.970268  227009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1018 09:11:47.970281  227009 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:11:48.126230  227009 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-182020
	
	I1018 09:11:48.126261  227009 ubuntu.go:182] provisioning hostname "pause-182020"
	I1018 09:11:48.126331  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:48.146923  227009 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:48.147185  227009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1018 09:11:48.147201  227009 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-182020 && echo "pause-182020" | sudo tee /etc/hostname
	I1018 09:11:48.297730  227009 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-182020
	
	I1018 09:11:48.297812  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:48.318678  227009 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:48.318984  227009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1018 09:11:48.319010  227009 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-182020' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-182020/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-182020' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:11:48.458105  227009 main.go:141] libmachine: SSH cmd err, output: <nil>: 
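The quoted shell above is an idempotent /etc/hosts edit: it rewrites the 127.0.1.1 entry only if no line already names the new hostname, and appends one when no 127.0.1.1 line exists at all. Verifying the result (sketch):

    grep -n '127.0.1.1' /etc/hosts   # expected after the edit: 127.0.1.1 pause-182020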
	I1018 09:11:48.458131  227009 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:11:48.458147  227009 ubuntu.go:190] setting up certificates
	I1018 09:11:48.458158  227009 provision.go:84] configureAuth start
	I1018 09:11:48.458228  227009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-182020
	I1018 09:11:48.477775  227009 provision.go:143] copyHostCerts
	I1018 09:11:48.477836  227009 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:11:48.477864  227009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:11:48.477952  227009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:11:48.478086  227009 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:11:48.478101  227009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:11:48.478145  227009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:11:48.478255  227009 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:11:48.478267  227009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:11:48.478305  227009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:11:48.478417  227009 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.pause-182020 san=[127.0.0.1 192.168.103.2 localhost minikube pause-182020]
	I1018 09:11:48.624138  227009 provision.go:177] copyRemoteCerts
	I1018 09:11:48.624193  227009 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:11:48.624232  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:48.644806  227009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/pause-182020/id_rsa Username:docker}
	I1018 09:11:48.744883  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:11:48.765175  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 09:11:48.786491  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:11:48.808479  227009 provision.go:87] duration metric: took 350.308619ms to configureAuth
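configureAuth regenerated the machine server certificate with the SANs listed at 09:11:48.478417 and pushed the CA plus server keypair to /etc/docker on the guest. The SANs can be confirmed against the copied cert (sketch, run inside the guest):

    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'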
	I1018 09:11:48.808529  227009 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:11:48.808764  227009 config.go:182] Loaded profile config "pause-182020": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:11:48.808885  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:48.830885  227009 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:48.831200  227009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1018 09:11:48.831226  227009 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:11:49.137749  227009 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:11:49.137774  227009 machine.go:96] duration metric: took 1.189827356s to provisionDockerMachine
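The SSH command at 09:11:48.831 wrote an environment drop-in consumed by the CRI-O unit and restarted the service; a quick check that both took effect (sketch):

    cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio           # prints "active" once the restart settles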
	I1018 09:11:49.137787  227009 start.go:293] postStartSetup for "pause-182020" (driver="docker")
	I1018 09:11:49.137799  227009 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:11:49.137858  227009 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:11:49.137920  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:49.158515  227009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/pause-182020/id_rsa Username:docker}
	I1018 09:11:49.260998  227009 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:11:49.265288  227009 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:11:49.265320  227009 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:11:49.265333  227009 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:11:49.265409  227009 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:11:49.265523  227009 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:11:49.265645  227009 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:11:49.273924  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:11:49.293360  227009 start.go:296] duration metric: took 155.545423ms for postStartSetup
	I1018 09:11:49.293496  227009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:11:49.293557  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:49.314368  227009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/pause-182020/id_rsa Username:docker}
	I1018 09:11:49.412866  227009 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
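The two df probes take row 2 of df's output for /var: column 5 of `df -h` is the percentage used, column 4 of `df -BG` the gigabytes still free, giving a cheap disk snapshot of the node. By hand:

    df -h /var | awk 'NR==2{print $5}'   # e.g. 23%
    df -BG /var | awk 'NR==2{print $4}'  # e.g. 250G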
	I1018 09:11:49.419084  227009 fix.go:56] duration metric: took 1.495499236s for fixHost
	I1018 09:11:49.419107  227009 start.go:83] releasing machines lock for "pause-182020", held for 1.49554297s
	I1018 09:11:49.419161  227009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-182020
	I1018 09:11:49.438223  227009 ssh_runner.go:195] Run: cat /version.json
	I1018 09:11:49.438278  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:49.438306  227009 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:11:49.438391  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:49.458939  227009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/pause-182020/id_rsa Username:docker}
	I1018 09:11:49.459691  227009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/pause-182020/id_rsa Username:docker}
	I1018 09:11:49.625141  227009 ssh_runner.go:195] Run: systemctl --version
	I1018 09:11:49.632022  227009 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:11:49.679764  227009 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:11:49.685559  227009 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:11:49.685629  227009 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:11:49.694663  227009 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
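The find/-exec above renames any stock bridge or podman CNI configs to *.mk_disabled so they cannot shadow the CNI minikube manages (kindnet is chosen further down); here nothing matched. A dry-run variant that only lists candidates (sketch):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) -print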
	I1018 09:11:49.694686  227009 start.go:495] detecting cgroup driver to use...
	I1018 09:11:49.694720  227009 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:11:49.694759  227009 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:11:49.711957  227009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:11:49.725459  227009 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:11:49.725522  227009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:11:49.742476  227009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:11:49.756466  227009 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:11:49.868275  227009 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:11:49.979406  227009 docker.go:234] disabling docker service ...
	I1018 09:11:49.979480  227009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:11:49.994864  227009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:11:50.008366  227009 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:11:50.116263  227009 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:11:50.229932  227009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:11:50.243903  227009 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:11:50.260011  227009 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:11:50.260067  227009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:50.271370  227009 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:11:50.271441  227009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:50.281185  227009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:50.290893  227009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:50.300647  227009 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:11:50.309429  227009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:50.318945  227009 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:50.328441  227009 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:50.338249  227009 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:11:50.346364  227009 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:11:50.354889  227009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:11:50.460260  227009 ssh_runner.go:195] Run: sudo systemctl restart crio
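The run of sed edits between 09:11:50.260 and 09:11:50.328 leaves /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, systemd cgroup management, conmon in the pod cgroup, and unprivileged low ports; the daemon-reload/restart pair then makes CRI-O pick the file up. Checking the net effect (sketch):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",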
	I1018 09:11:50.615826  227009 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:11:50.615886  227009 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:11:50.620528  227009 start.go:563] Will wait 60s for crictl version
	I1018 09:11:50.620586  227009 ssh_runner.go:195] Run: which crictl
	I1018 09:11:50.625263  227009 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:11:50.650201  227009 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
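The version probe above succeeds because the tee at 09:11:50.243 pinned crictl to CRI-O's socket; every later `crictl` call in this log goes through the same endpoint. The entire resulting config is:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock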
	I1018 09:11:50.650281  227009 ssh_runner.go:195] Run: crio --version
	I1018 09:11:50.679769  227009 ssh_runner.go:195] Run: crio --version
	I1018 09:11:50.711204  227009 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:11:50.712601  227009 cli_runner.go:164] Run: docker network inspect pause-182020 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:11:50.731466  227009 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:11:50.736136  227009 kubeadm.go:883] updating cluster {Name:pause-182020 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-182020 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:11:50.736292  227009 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:11:50.736338  227009 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:11:50.769211  227009 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:11:50.769234  227009 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:11:50.769286  227009 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:11:50.795215  227009 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:11:50.795239  227009 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:11:50.795250  227009 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:11:50.795389  227009 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-182020 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-182020 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:11:50.795474  227009 ssh_runner.go:195] Run: crio config
	I1018 09:11:50.841887  227009 cni.go:84] Creating CNI manager for ""
	I1018 09:11:50.841905  227009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:11:50.841917  227009 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:11:50.841936  227009 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-182020 NodeName:pause-182020 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:11:50.842070  227009 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-182020"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:11:50.842129  227009 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:11:50.850866  227009 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:11:50.850930  227009 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:11:50.859178  227009 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 09:11:50.873318  227009 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:11:50.886880  227009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
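The three kubeadm documents printed above were just staged to /var/tmp/minikube/kubeadm.yaml.new; further down (09:11:51.761) they are diffed against the live file and found identical, so kubeadm is never re-invoked on this restart. On a fresh node the same file would be consumed roughly as follows (sketch only, not executed here):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml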
	I1018 09:11:50.900234  227009 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:11:50.904356  227009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:11:51.025130  227009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:11:51.040251  227009 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020 for IP: 192.168.103.2
	I1018 09:11:51.040269  227009 certs.go:195] generating shared ca certs ...
	I1018 09:11:51.040283  227009 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:11:51.040476  227009 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:11:51.040526  227009 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:11:51.040539  227009 certs.go:257] generating profile certs ...
	I1018 09:11:51.040635  227009 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/client.key
	I1018 09:11:51.040726  227009 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/apiserver.key.71be5d44
	I1018 09:11:51.040785  227009 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/proxy-client.key
	I1018 09:11:51.040926  227009 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:11:51.040968  227009 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:11:51.040979  227009 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:11:51.041019  227009 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:11:51.041050  227009 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:11:51.041076  227009 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:11:51.041131  227009 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:11:51.041928  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:11:51.063571  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:11:51.084104  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:11:51.104545  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:11:51.124915  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 09:11:51.146461  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:11:51.166121  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:11:51.186287  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1018 09:11:51.206889  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:11:51.227271  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:11:51.248204  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:11:51.269233  227009 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:11:51.284060  227009 ssh_runner.go:195] Run: openssl version
	I1018 09:11:51.290675  227009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:11:51.299969  227009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:11:51.304066  227009 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:11:51.304118  227009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:11:51.343765  227009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:11:51.352322  227009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:11:51.362547  227009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:11:51.366485  227009 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:11:51.366552  227009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:11:51.402911  227009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:11:51.412173  227009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:11:51.421497  227009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:11:51.425780  227009 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:11:51.425846  227009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:11:51.463712  227009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
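The hex names in the ln targets above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: `openssl x509 -hash` prints the value that library lookups expect as <hash>.0 under /etc/ssl/certs, which is exactly what each test-and-link pair creates. Reproducing one link by hand (sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"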
	I1018 09:11:51.473138  227009 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:11:51.477617  227009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:11:51.515747  227009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:11:51.553825  227009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:11:51.591223  227009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:11:51.628388  227009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:11:51.665067  227009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
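Each `-checkend 86400` above asks whether the certificate will still be valid 86400 seconds (24 h) from now; openssl exits 0 if so and 1 if it would have expired, which is what decides whether the certs get regenerated. The same check over several certs (sketch):

    for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
             /var/lib/minikube/certs/etcd/server.crt; do
      sudo openssl x509 -noout -in "$c" -checkend 86400 && echo "$c ok" || echo "$c expires within 24h"
    done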
	I1018 09:11:51.700251  227009 kubeadm.go:400] StartCluster: {Name:pause-182020 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-182020 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:11:51.700413  227009 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:11:51.700466  227009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:11:51.729888  227009 cri.go:89] found id: "f5ca822f709a1b9798c695190a4248e26ef175c5318934a88c07b7b1904bff04"
	I1018 09:11:51.729910  227009 cri.go:89] found id: "d7a65a8f4fe938ccc6475ccee4fbc8b3e900a2ff7eddc8b7274ea535e3bfcea8"
	I1018 09:11:51.729914  227009 cri.go:89] found id: "eaf98db334b9006e58ac3d2f70ad88ace993203f90b34acea96e2f6ddfbeaec8"
	I1018 09:11:51.729917  227009 cri.go:89] found id: "8807016bdc6e7d79c7d0284d142aa731ef1ac4dc315dba67d96ef49d194d63d2"
	I1018 09:11:51.729921  227009 cri.go:89] found id: "04e5f45aeb8859409b77f6e8ae85fde70680a8b11a8d7d93b61866eee5c7d370"
	I1018 09:11:51.729924  227009 cri.go:89] found id: "12a42de0cfa9baf1a98897014767f47b63a1c6ae213ffaf923be29be984231c3"
	I1018 09:11:51.729926  227009 cri.go:89] found id: "d92c7b180dca0476851d7f6127a563b17c31e334cb49a9c36a31c2355cb3eeae"
	I1018 09:11:51.729929  227009 cri.go:89] found id: ""
	I1018 09:11:51.729969  227009 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:11:51.741870  227009 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:11:51Z" level=error msg="open /run/runc: no such file or directory"
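The runc failure above is benign on this stack: `runc list` reads /run/runc, runc's default state root, and that directory does not exist here, most likely because CRI-O drives its OCI runtime under a different state root, so minikube concludes nothing is paused and moves on. Listing container state through the CRI instead works (sketch):

    sudo crictl ps -a -o json | head -n 20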
	I1018 09:11:51.741992  227009 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:11:51.750734  227009 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:11:51.750759  227009 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:11:51.750800  227009 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:11:51.758868  227009 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:11:51.759648  227009 kubeconfig.go:125] found "pause-182020" server: "https://192.168.103.2:8443"
	I1018 09:11:51.760477  227009 kapi.go:59] client config for pause-182020: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/client.key", CAFile:"/home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:11:51.760880  227009 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 09:11:51.760893  227009 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 09:11:51.760898  227009 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 09:11:51.760902  227009 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 09:11:51.760905  227009 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 09:11:51.761186  227009 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:11:51.769619  227009 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1018 09:11:51.769650  227009 kubeadm.go:601] duration metric: took 18.885507ms to restartPrimaryControlPlane
	I1018 09:11:51.769661  227009 kubeadm.go:402] duration metric: took 69.420174ms to StartCluster
	I1018 09:11:51.769681  227009 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:11:51.769756  227009 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:11:51.770738  227009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:11:51.770957  227009 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:11:51.771023  227009 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:11:51.771143  227009 config.go:182] Loaded profile config "pause-182020": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:11:51.774122  227009 out.go:179] * Verifying Kubernetes components...
	I1018 09:11:51.774140  227009 out.go:179] * Enabled addons: 
	I1018 09:11:51.775400  227009 addons.go:514] duration metric: took 4.384353ms for enable addons: enabled=[]
	I1018 09:11:51.775437  227009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:11:51.888528  227009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:11:51.903163  227009 node_ready.go:35] waiting up to 6m0s for node "pause-182020" to be "Ready" ...
	I1018 09:11:51.911196  227009 node_ready.go:49] node "pause-182020" is "Ready"
	I1018 09:11:51.911229  227009 node_ready.go:38] duration metric: took 8.024706ms for node "pause-182020" to be "Ready" ...
	I1018 09:11:51.911246  227009 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:11:51.911298  227009 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:11:51.923793  227009 api_server.go:72] duration metric: took 152.807283ms to wait for apiserver process to appear ...
	I1018 09:11:51.923818  227009 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:11:51.923839  227009 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:11:51.929415  227009 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
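The healthz probe is a plain HTTPS GET that returns the literal body "ok" with status 200 when the apiserver is ready; Kubernetes exposes /healthz to unauthenticated callers by default, so a hand-run equivalent is (sketch; -k skips TLS verification):

    curl -sk https://192.168.103.2:8443/healthz   # prints: ok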
	I1018 09:11:51.930444  227009 api_server.go:141] control plane version: v1.34.1
	I1018 09:11:51.930476  227009 api_server.go:131] duration metric: took 6.650412ms to wait for apiserver health ...
	I1018 09:11:51.930487  227009 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:11:51.934198  227009 system_pods.go:59] 7 kube-system pods found
	I1018 09:11:51.934235  227009 system_pods.go:61] "coredns-66bc5c9577-s4g4q" [2a6a8dd2-6620-4f70-8754-7205c4c93f06] Running
	I1018 09:11:51.934241  227009 system_pods.go:61] "etcd-pause-182020" [e0d74bbe-9879-4ae2-88d8-5c9bcba491ab] Running
	I1018 09:11:51.934245  227009 system_pods.go:61] "kindnet-kbtnf" [89b40206-c23b-47d5-9f2c-e653f39823f8] Running
	I1018 09:11:51.934249  227009 system_pods.go:61] "kube-apiserver-pause-182020" [0b8729c9-cc0a-46ee-ac7a-5a389356ab45] Running
	I1018 09:11:51.934252  227009 system_pods.go:61] "kube-controller-manager-pause-182020" [6892a1ce-d835-4e60-a76b-06a6341903dc] Running
	I1018 09:11:51.934256  227009 system_pods.go:61] "kube-proxy-zlxhp" [3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc] Running
	I1018 09:11:51.934259  227009 system_pods.go:61] "kube-scheduler-pause-182020" [a2ebf54c-64d2-4247-ac81-3f15ff9a32e8] Running
	I1018 09:11:51.934265  227009 system_pods.go:74] duration metric: took 3.77155ms to wait for pod list to return data ...
	I1018 09:11:51.934275  227009 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:11:51.936329  227009 default_sa.go:45] found service account: "default"
	I1018 09:11:51.936377  227009 default_sa.go:55] duration metric: took 2.094236ms for default service account to be created ...
	I1018 09:11:51.936388  227009 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:11:51.939074  227009 system_pods.go:86] 7 kube-system pods found
	I1018 09:11:51.939103  227009 system_pods.go:89] "coredns-66bc5c9577-s4g4q" [2a6a8dd2-6620-4f70-8754-7205c4c93f06] Running
	I1018 09:11:51.939112  227009 system_pods.go:89] "etcd-pause-182020" [e0d74bbe-9879-4ae2-88d8-5c9bcba491ab] Running
	I1018 09:11:51.939118  227009 system_pods.go:89] "kindnet-kbtnf" [89b40206-c23b-47d5-9f2c-e653f39823f8] Running
	I1018 09:11:51.939123  227009 system_pods.go:89] "kube-apiserver-pause-182020" [0b8729c9-cc0a-46ee-ac7a-5a389356ab45] Running
	I1018 09:11:51.939129  227009 system_pods.go:89] "kube-controller-manager-pause-182020" [6892a1ce-d835-4e60-a76b-06a6341903dc] Running
	I1018 09:11:51.939138  227009 system_pods.go:89] "kube-proxy-zlxhp" [3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc] Running
	I1018 09:11:51.939143  227009 system_pods.go:89] "kube-scheduler-pause-182020" [a2ebf54c-64d2-4247-ac81-3f15ff9a32e8] Running
	I1018 09:11:51.939156  227009 system_pods.go:126] duration metric: took 2.760776ms to wait for k8s-apps to be running ...
	I1018 09:11:51.939172  227009 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:11:51.939217  227009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:11:51.952831  227009 system_svc.go:56] duration metric: took 13.651712ms WaitForService to wait for kubelet
	I1018 09:11:51.952860  227009 kubeadm.go:586] duration metric: took 181.878065ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:11:51.952881  227009 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:11:51.955723  227009 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:11:51.955751  227009 node_conditions.go:123] node cpu capacity is 8
	I1018 09:11:51.955766  227009 node_conditions.go:105] duration metric: took 2.879443ms to run NodePressure ...
	I1018 09:11:51.955779  227009 start.go:241] waiting for startup goroutines ...
	I1018 09:11:51.955789  227009 start.go:246] waiting for cluster config update ...
	I1018 09:11:51.955801  227009 start.go:255] writing updated cluster config ...
	I1018 09:11:51.956132  227009 ssh_runner.go:195] Run: rm -f paused
	I1018 09:11:51.960241  227009 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:11:51.960892  227009 kapi.go:59] client config for pause-182020: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/client.key", CAFile:"/home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:11:51.963601  227009 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s4g4q" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:51.967801  227009 pod_ready.go:94] pod "coredns-66bc5c9577-s4g4q" is "Ready"
	I1018 09:11:51.967821  227009 pod_ready.go:86] duration metric: took 4.201245ms for pod "coredns-66bc5c9577-s4g4q" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:51.970097  227009 pod_ready.go:83] waiting for pod "etcd-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:51.974381  227009 pod_ready.go:94] pod "etcd-pause-182020" is "Ready"
	I1018 09:11:51.974433  227009 pod_ready.go:86] duration metric: took 4.286762ms for pod "etcd-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:51.976488  227009 pod_ready.go:83] waiting for pod "kube-apiserver-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:51.980284  227009 pod_ready.go:94] pod "kube-apiserver-pause-182020" is "Ready"
	I1018 09:11:51.980305  227009 pod_ready.go:86] duration metric: took 3.795009ms for pod "kube-apiserver-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:51.982339  227009 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:52.364503  227009 pod_ready.go:94] pod "kube-controller-manager-pause-182020" is "Ready"
	I1018 09:11:52.364533  227009 pod_ready.go:86] duration metric: took 382.160273ms for pod "kube-controller-manager-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:52.564713  227009 pod_ready.go:83] waiting for pod "kube-proxy-zlxhp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:52.963955  227009 pod_ready.go:94] pod "kube-proxy-zlxhp" is "Ready"
	I1018 09:11:52.963980  227009 pod_ready.go:86] duration metric: took 399.239645ms for pod "kube-proxy-zlxhp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:53.164227  227009 pod_ready.go:83] waiting for pod "kube-scheduler-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:53.564891  227009 pod_ready.go:94] pod "kube-scheduler-pause-182020" is "Ready"
	I1018 09:11:53.564918  227009 pod_ready.go:86] duration metric: took 400.663805ms for pod "kube-scheduler-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:53.564933  227009 pod_ready.go:40] duration metric: took 1.604659511s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:11:53.614948  227009 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:11:53.617162  227009 out.go:179] * Done! kubectl is now configured to use "pause-182020" cluster and "default" namespace by default
	I1018 09:11:49.084715  227604 out.go:252] * Restarting existing docker container for "NoKubernetes-548249" ...
	I1018 09:11:49.084781  227604 cli_runner.go:164] Run: docker start NoKubernetes-548249
	I1018 09:11:49.356199  227604 cli_runner.go:164] Run: docker container inspect NoKubernetes-548249 --format={{.State.Status}}
	I1018 09:11:49.377687  227604 kic.go:430] container "NoKubernetes-548249" state is running.
	I1018 09:11:49.378055  227604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-548249
	I1018 09:11:49.398025  227604 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/NoKubernetes-548249/config.json ...
	I1018 09:11:49.398229  227604 machine.go:93] provisionDockerMachine start ...
	I1018 09:11:49.398285  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:49.419802  227604 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:49.420095  227604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 09:11:49.420101  227604 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:11:49.420843  227604 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35120->127.0.0.1:33048: read: connection reset by peer
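The connection reset above is the usual race right after `docker start`: sshd inside the container is not listening yet, so the first dial fails and the provisioner retries until the hostname command succeeds about three seconds later. A wait-loop equivalent (sketch; port and key path as logged for this machine):

    until ssh -p 33048 -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/21767-5897/.minikube/machines/NoKubernetes-548249/id_rsa \
        docker@127.0.0.1 true 2>/dev/null; do
      sleep 1
    done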
	I1018 09:11:52.557475  227604 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-548249
	
	I1018 09:11:52.557492  227604 ubuntu.go:182] provisioning hostname "NoKubernetes-548249"
	I1018 09:11:52.557552  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:52.577102  227604 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:52.577306  227604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 09:11:52.577312  227604 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-548249 && echo "NoKubernetes-548249" | sudo tee /etc/hostname
	I1018 09:11:52.724080  227604 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-548249
	
	I1018 09:11:52.724140  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:52.744059  227604 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:52.744270  227604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 09:11:52.744281  227604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-548249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-548249/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-548249' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:11:52.881686  227604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:11:52.881712  227604 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:11:52.881732  227604 ubuntu.go:190] setting up certificates
	I1018 09:11:52.881761  227604 provision.go:84] configureAuth start
	I1018 09:11:52.881815  227604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-548249
	I1018 09:11:52.901482  227604 provision.go:143] copyHostCerts
	I1018 09:11:52.901536  227604 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:11:52.901547  227604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:11:52.901616  227604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:11:52.901715  227604 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:11:52.901718  227604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:11:52.901741  227604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:11:52.901812  227604 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:11:52.901814  227604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:11:52.901836  227604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:11:52.901902  227604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-548249 san=[127.0.0.1 192.168.94.2 NoKubernetes-548249 localhost minikube]
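	
	To double-check the SANs actually present in the generated server certificate, one option (assuming OpenSSL 1.1.1+ for the -ext flag; the path is the one from the log line above) is:
	
		openssl x509 -noout -ext subjectAltName \
		  -in /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem
	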
	I1018 09:11:53.360978  227604 provision.go:177] copyRemoteCerts
	I1018 09:11:53.361029  227604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:11:53.361062  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:53.381246  227604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/NoKubernetes-548249/id_rsa Username:docker}
	I1018 09:11:53.481786  227604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:11:53.500405  227604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 09:11:53.519843  227604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:11:53.538876  227604 provision.go:87] duration metric: took 657.102341ms to configureAuth
	I1018 09:11:53.538896  227604 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:11:53.539054  227604 config.go:182] Loaded profile config "NoKubernetes-548249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1018 09:11:53.539142  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:53.559223  227604 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:53.559505  227604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 09:11:53.559521  227604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:11:53.816840  227604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:11:53.816854  227604 machine.go:96] duration metric: took 4.418619285s to provisionDockerMachine
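	
	A note on the CRIO_MINIKUBE_OPTIONS write above: it only takes effect because the node image's crio.service sources that file. The unit itself is not shown in this log, so the fragment below is an assumed sketch of the wiring, not verbatim contents:
	
		# assumed fragment of crio.service in the node image
		[Service]
		EnvironmentFile=-/etc/sysconfig/crio.minikube
		# ExecStart then expands $CRIO_MINIKUBE_OPTIONS, adding
		# --insecure-registry 10.96.0.0/12 to crio's command line
	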
	I1018 09:11:53.816864  227604 start.go:293] postStartSetup for "NoKubernetes-548249" (driver="docker")
	I1018 09:11:53.816872  227604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:11:53.816919  227604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:11:53.816948  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:53.837299  227604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/NoKubernetes-548249/id_rsa Username:docker}
	I1018 09:11:53.936298  227604 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:11:53.940355  227604 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:11:53.940380  227604 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:11:53.940393  227604 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:11:53.940445  227604 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:11:53.940527  227604 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:11:53.940614  227604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:11:53.953527  227604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:11:53.975123  227604 start.go:296] duration metric: took 158.244807ms for postStartSetup
	I1018 09:11:53.975216  227604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:11:53.975246  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:53.994884  227604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/NoKubernetes-548249/id_rsa Username:docker}
	I1018 09:11:54.091538  227604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:11:54.096575  227604 fix.go:56] duration metric: took 5.03382396s for fixHost
	I1018 09:11:54.096594  227604 start.go:83] releasing machines lock for "NoKubernetes-548249", held for 5.033862437s
	I1018 09:11:54.096670  227604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-548249
	I1018 09:11:54.119952  227604 ssh_runner.go:195] Run: cat /version.json
	I1018 09:11:54.120005  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:54.120021  227604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:11:54.120089  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:54.142736  227604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/NoKubernetes-548249/id_rsa Username:docker}
	I1018 09:11:54.143078  227604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/NoKubernetes-548249/id_rsa Username:docker}
	I1018 09:11:54.301849  227604 ssh_runner.go:195] Run: systemctl --version
	I1018 09:11:54.309557  227604 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:11:54.346026  227604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:11:54.351327  227604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:11:54.351420  227604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:11:54.360409  227604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:11:54.360439  227604 start.go:495] detecting cgroup driver to use...
	I1018 09:11:54.360478  227604 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:11:54.360532  227604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:11:54.379631  227604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:11:54.395393  227604 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:11:54.395445  227604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:11:54.416443  227604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:11:54.431783  227604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:11:54.541556  227604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:11:54.642629  227604 docker.go:234] disabling docker service ...
	I1018 09:11:54.642679  227604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:11:54.659169  227604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:11:54.672725  227604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:11:54.756489  227604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:11:54.839939  227604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:11:54.853258  227604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:11:54.868618  227604 download.go:108] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/linux/amd64/v0.0.0/kubeadm
	I1018 09:11:55.062644  227604 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 09:11:55.062720  227604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:55.077216  227604 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:11:55.077267  227604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:55.087707  227604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:55.097254  227604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:55.106960  227604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:11:55.115877  227604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:11:55.123991  227604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:11:55.132150  227604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:11:55.220398  227604 ssh_runner.go:195] Run: sudo systemctl restart crio
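	
	Taken together, the sed edits above aim to leave /etc/crio/crio.conf.d/02-crio.conf with values equivalent to the fragment below before crio is restarted; the TOML section headers are assumed from a stock CRI-O config and are not visible in this log:
	
		[crio.runtime]
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
	
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.9"
	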
	I1018 09:11:55.337632  227604 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:11:55.337706  227604 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:11:55.342490  227604 start.go:563] Will wait 60s for crictl version
	I1018 09:11:55.342556  227604 ssh_runner.go:195] Run: which crictl
	I1018 09:11:55.346684  227604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:11:55.373430  227604 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:11:55.373520  227604 ssh_runner.go:195] Run: crio --version
	I1018 09:11:55.405816  227604 ssh_runner.go:195] Run: crio --version
	I1018 09:11:55.438196  227604 out.go:179] * Preparing CRI-O 1.34.1 ...
	I1018 09:11:55.439797  227604 ssh_runner.go:195] Run: rm -f paused
	I1018 09:11:55.445624  227604 out.go:179] * Done! minikube is ready without Kubernetes!
	I1018 09:11:55.449224  227604 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
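	
	This run exercises minikube's no-Kubernetes mode (note KubernetesVersion=v0.0.0 in the profile config earlier in the log). A minimal way to reproduce this kind of start, with the profile name taken from the log and the flag combination assumed from context rather than copied from the test harness, is:
	
		minikube start -p NoKubernetes-548249 --no-kubernetes --driver=docker --container-runtime=crio
	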
	I1018 09:11:50.921956  189686 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:11:50.922565  189686 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:11:50.922628  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:11:50.922688  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:11:50.956372  189686 cri.go:89] found id: "7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b"
	I1018 09:11:50.956395  189686 cri.go:89] found id: ""
	I1018 09:11:50.956404  189686 logs.go:282] 1 containers: [7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b]
	I1018 09:11:50.956463  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:50.960453  189686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:11:50.960537  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:11:50.991133  189686 cri.go:89] found id: ""
	I1018 09:11:50.991155  189686 logs.go:282] 0 containers: []
	W1018 09:11:50.991162  189686 logs.go:284] No container was found matching "etcd"
	I1018 09:11:50.991168  189686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:11:50.991213  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:11:51.020511  189686 cri.go:89] found id: ""
	I1018 09:11:51.020543  189686 logs.go:282] 0 containers: []
	W1018 09:11:51.020551  189686 logs.go:284] No container was found matching "coredns"
	I1018 09:11:51.020560  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:11:51.020627  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:11:51.051163  189686 cri.go:89] found id: "86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e"
	I1018 09:11:51.051193  189686 cri.go:89] found id: ""
	I1018 09:11:51.051203  189686 logs.go:282] 1 containers: [86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e]
	I1018 09:11:51.051264  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:51.055119  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:11:51.055195  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:11:51.083991  189686 cri.go:89] found id: ""
	I1018 09:11:51.084023  189686 logs.go:282] 0 containers: []
	W1018 09:11:51.084032  189686 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:11:51.084038  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:11:51.084097  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:11:51.112498  189686 cri.go:89] found id: "20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312"
	I1018 09:11:51.112529  189686 cri.go:89] found id: ""
	I1018 09:11:51.112540  189686 logs.go:282] 1 containers: [20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312]
	I1018 09:11:51.112599  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:51.116672  189686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:11:51.116737  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:11:51.146899  189686 cri.go:89] found id: ""
	I1018 09:11:51.146923  189686 logs.go:282] 0 containers: []
	W1018 09:11:51.146933  189686 logs.go:284] No container was found matching "kindnet"
	I1018 09:11:51.146940  189686 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:11:51.146997  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:11:51.175436  189686 cri.go:89] found id: ""
	I1018 09:11:51.175463  189686 logs.go:282] 0 containers: []
	W1018 09:11:51.175474  189686 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:11:51.175484  189686 logs.go:123] Gathering logs for kube-scheduler [86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e] ...
	I1018 09:11:51.175499  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e"
	I1018 09:11:51.224426  189686 logs.go:123] Gathering logs for kube-controller-manager [20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312] ...
	I1018 09:11:51.224469  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312"
	I1018 09:11:51.255199  189686 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:11:51.255231  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:11:51.305975  189686 logs.go:123] Gathering logs for container status ...
	I1018 09:11:51.306011  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:11:51.339266  189686 logs.go:123] Gathering logs for kubelet ...
	I1018 09:11:51.339302  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:11:51.427620  189686 logs.go:123] Gathering logs for dmesg ...
	I1018 09:11:51.427648  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:11:51.442267  189686 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:11:51.442299  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:11:51.503030  189686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:11:51.503060  189686 logs.go:123] Gathering logs for kube-apiserver [7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b] ...
	I1018 09:11:51.503072  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b"
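	
	The gathering pass above can be replayed by hand on the node; the commands are exactly the ones quoted in the log, for example for the kube-apiserver container:
	
		sudo crictl ps -a --quiet --name=kube-apiserver
		sudo crictl logs --tail 400 7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b
	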
	I1018 09:11:54.037420  189686 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:11:54.037817  189686 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:11:54.037861  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:11:54.037942  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:11:54.067757  189686 cri.go:89] found id: "7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b"
	I1018 09:11:54.067780  189686 cri.go:89] found id: ""
	I1018 09:11:54.067788  189686 logs.go:282] 1 containers: [7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b]
	I1018 09:11:54.067840  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:54.071673  189686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:11:54.071732  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:11:54.102955  189686 cri.go:89] found id: ""
	I1018 09:11:54.102982  189686 logs.go:282] 0 containers: []
	W1018 09:11:54.102993  189686 logs.go:284] No container was found matching "etcd"
	I1018 09:11:54.103001  189686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:11:54.103059  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:11:54.136134  189686 cri.go:89] found id: ""
	I1018 09:11:54.136161  189686 logs.go:282] 0 containers: []
	W1018 09:11:54.136171  189686 logs.go:284] No container was found matching "coredns"
	I1018 09:11:54.136178  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:11:54.136334  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:11:54.168580  189686 cri.go:89] found id: "86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e"
	I1018 09:11:54.168601  189686 cri.go:89] found id: ""
	I1018 09:11:54.168610  189686 logs.go:282] 1 containers: [86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e]
	I1018 09:11:54.168666  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:54.172964  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:11:54.173030  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:11:54.202369  189686 cri.go:89] found id: ""
	I1018 09:11:54.202394  189686 logs.go:282] 0 containers: []
	W1018 09:11:54.202403  189686 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:11:54.202416  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:11:54.202466  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:11:54.229539  189686 cri.go:89] found id: "20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312"
	I1018 09:11:54.229564  189686 cri.go:89] found id: ""
	I1018 09:11:54.229574  189686 logs.go:282] 1 containers: [20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312]
	I1018 09:11:54.229624  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:54.234650  189686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:11:54.234719  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:11:54.263596  189686 cri.go:89] found id: ""
	I1018 09:11:54.263626  189686 logs.go:282] 0 containers: []
	W1018 09:11:54.263636  189686 logs.go:284] No container was found matching "kindnet"
	I1018 09:11:54.263644  189686 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:11:54.263705  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:11:54.292652  189686 cri.go:89] found id: ""
	I1018 09:11:54.292680  189686 logs.go:282] 0 containers: []
	W1018 09:11:54.292688  189686 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:11:54.292697  189686 logs.go:123] Gathering logs for dmesg ...
	I1018 09:11:54.292709  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:11:54.308036  189686 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:11:54.308068  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:11:54.373766  189686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:11:54.373793  189686 logs.go:123] Gathering logs for kube-apiserver [7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b] ...
	I1018 09:11:54.373813  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b"
	I1018 09:11:54.415826  189686 logs.go:123] Gathering logs for kube-scheduler [86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e] ...
	I1018 09:11:54.415869  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e"
	I1018 09:11:54.476788  189686 logs.go:123] Gathering logs for kube-controller-manager [20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312] ...
	I1018 09:11:54.476827  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312"
	I1018 09:11:54.504731  189686 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:11:54.504759  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:11:54.558946  189686 logs.go:123] Gathering logs for container status ...
	I1018 09:11:54.558983  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:11:54.601748  189686 logs.go:123] Gathering logs for kubelet ...
	I1018 09:11:54.601777  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
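	
	The sectioned dump below (CRI-O, container status, coredns, describe nodes, dmesg, etcd, kernel, kindnet) is the post-mortem diagnostics block for the pause-182020 profile. The same output can be regenerated against a live profile with something like:
	
		minikube logs -p pause-182020 --file=pause-182020.log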
	
	
	==> CRI-O <==
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.55821105Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.559041721Z" level=info msg="Conmon does support the --sync option"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.559059641Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.559076277Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.55986892Z" level=info msg="Conmon does support the --sync option"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.559886212Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.563690164Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.563722403Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.564310891Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.564709336Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.564779386Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.570578614Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.610937783Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-s4g4q Namespace:kube-system ID:9ff6d7c752170d3d3c82b2a67b922755e3af1a0d48eeb2dbb014ba305c5a84bb UID:2a6a8dd2-6620-4f70-8754-7205c4c93f06 NetNS:/var/run/netns/32668cb7-e2a1-47f6-aaed-cffc9df42a2e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001280f8}] Aliases:map[]}"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.61111723Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-s4g4q for CNI network kindnet (type=ptp)"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611574313Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611604121Z" level=info msg="Starting seccomp notifier watcher"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611647108Z" level=info msg="Create NRI interface"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611720926Z" level=info msg="built-in NRI default validator is disabled"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611726898Z" level=info msg="runtime interface created"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611737143Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611742903Z" level=info msg="runtime interface starting up..."
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611748323Z" level=info msg="starting plugins..."
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611758564Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.612050519Z" level=info msg="No systemd watchdog enabled"
	Oct 18 09:11:50 pause-182020 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	f5ca822f709a1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   10 seconds ago       Running             coredns                   0                   9ff6d7c752170       coredns-66bc5c9577-s4g4q               kube-system
	d7a65a8f4fe93       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   51 seconds ago       Running             kube-proxy                0                   8ef1dc84e06f9       kube-proxy-zlxhp                       kube-system
	eaf98db334b90       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   51 seconds ago       Running             kindnet-cni               0                   879525b712465       kindnet-kbtnf                          kube-system
	8807016bdc6e7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Running             kube-controller-manager   0                   005bb2ee16637       kube-controller-manager-pause-182020   kube-system
	04e5f45aeb885       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Running             kube-scheduler            0                   8c8cc334cacc0       kube-scheduler-pause-182020            kube-system
	12a42de0cfa9b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Running             kube-apiserver            0                   777b8eebef534       kube-apiserver-pause-182020            kube-system
	d92c7b180dca0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      0                   07d56c9ba7ddd       etcd-pause-182020                      kube-system
	
	
	==> coredns [f5ca822f709a1b9798c695190a4248e26ef175c5318934a88c07b7b1904bff04] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42798 - 49055 "HINFO IN 7574793521949092101.1276386647293361466. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091090041s
	
	
	==> describe nodes <==
	Name:               pause-182020
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-182020
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=pause-182020
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_10_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:10:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-182020
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:11:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:11:45 +0000   Sat, 18 Oct 2025 09:10:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:11:45 +0000   Sat, 18 Oct 2025 09:10:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:11:45 +0000   Sat, 18 Oct 2025 09:10:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:11:45 +0000   Sat, 18 Oct 2025 09:11:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-182020
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                d958dd9e-0008-4313-9519-339af1cb971b
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-s4g4q                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     52s
	  kube-system                 etcd-pause-182020                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         58s
	  kube-system                 kindnet-kbtnf                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      52s
	  kube-system                 kube-apiserver-pause-182020             250m (3%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-controller-manager-pause-182020    200m (2%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-zlxhp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-scheduler-pause-182020             100m (1%)     0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 51s   kube-proxy       
	  Normal  Starting                 58s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s   kubelet          Node pause-182020 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s   kubelet          Node pause-182020 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s   kubelet          Node pause-182020 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s   node-controller  Node pause-182020 event: Registered Node pause-182020 in Controller
	  Normal  NodeReady                11s   kubelet          Node pause-182020 status is now: NodeReady
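	
	For reference, the percentages in the resource tables above are computed against the node's Allocatable pool: cpu 850m / 8000m = 10.625%, printed as 10%; memory 220Mi = 225280Ki, and 225280Ki / 32863448Ki ≈ 0.69%, printed as 0%.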
	
	
	==> dmesg <==
	[  +0.101295] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028366] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.196963] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.012248] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.023849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.024040] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +2.047589] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +4.031586] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +8.255150] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[ +16.382250] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[Oct18 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	
	
	==> etcd [d92c7b180dca0476851d7f6127a563b17c31e334cb49a9c36a31c2355cb3eeae] <==
	{"level":"warn","ts":"2025-10-18T09:10:55.443861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:10:55.494118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:11:04.602368Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.268593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-10-18T09:11:04.602474Z","caller":"traceutil/trace.go:172","msg":"trace[1960749243] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:350; }","duration":"146.438968ms","start":"2025-10-18T09:11:04.456017Z","end":"2025-10-18T09:11:04.602456Z","steps":["trace[1960749243] 'range keys from in-memory index tree'  (duration: 146.11788ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:11:04.824107Z","caller":"traceutil/trace.go:172","msg":"trace[871428486] transaction","detail":"{read_only:false; response_revision:353; number_of_response:1; }","duration":"150.696184ms","start":"2025-10-18T09:11:04.673393Z","end":"2025-10-18T09:11:04.824089Z","steps":["trace[871428486] 'process raft request'  (duration: 150.590199ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:11:04.969504Z","caller":"traceutil/trace.go:172","msg":"trace[862003141] transaction","detail":"{read_only:false; response_revision:356; number_of_response:1; }","duration":"124.652736ms","start":"2025-10-18T09:11:04.844831Z","end":"2025-10-18T09:11:04.969483Z","steps":["trace[862003141] 'process raft request'  (duration: 124.462649ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:11:05.237930Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.454721ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789411450250622 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-qfkjs\" mod_revision:344 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-qfkjs\" value_size:4295 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-qfkjs\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T09:11:05.238104Z","caller":"traceutil/trace.go:172","msg":"trace[1918107826] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"209.889898ms","start":"2025-10-18T09:11:05.028202Z","end":"2025-10-18T09:11:05.238092Z","steps":["trace[1918107826] 'process raft request'  (duration: 209.822124ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:11:05.238121Z","caller":"traceutil/trace.go:172","msg":"trace[498431804] transaction","detail":"{read_only:false; response_revision:360; number_of_response:1; }","duration":"211.489175ms","start":"2025-10-18T09:11:05.026611Z","end":"2025-10-18T09:11:05.238100Z","steps":["trace[498431804] 'process raft request'  (duration: 80.379904ms)","trace[498431804] 'compare'  (duration: 130.29806ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:11:05.404536Z","caller":"traceutil/trace.go:172","msg":"trace[671931606] transaction","detail":"{read_only:false; number_of_response:1; response_revision:362; }","duration":"165.129764ms","start":"2025-10-18T09:11:05.239389Z","end":"2025-10-18T09:11:05.404519Z","steps":["trace[671931606] 'process raft request'  (duration: 127.044853ms)","trace[671931606] 'compare'  (duration: 38.002835ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:11:05.408703Z","caller":"traceutil/trace.go:172","msg":"trace[1161089971] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"164.711156ms","start":"2025-10-18T09:11:05.243970Z","end":"2025-10-18T09:11:05.408681Z","steps":["trace[1161089971] 'process raft request'  (duration: 164.60829ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:11:05.674579Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.145155ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789411450250627 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577.186f8addff9780d1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577.186f8addff9780d1\" value_size:622 lease:4650417374595474502 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T09:11:05.674879Z","caller":"traceutil/trace.go:172","msg":"trace[1382970621] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"267.378682ms","start":"2025-10-18T09:11:05.407479Z","end":"2025-10-18T09:11:05.674858Z","steps":["trace[1382970621] 'process raft request'  (duration: 125.894638ms)","trace[1382970621] 'compare'  (duration: 141.028875ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:11:05.674939Z","caller":"traceutil/trace.go:172","msg":"trace[323921766] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"265.344732ms","start":"2025-10-18T09:11:05.409582Z","end":"2025-10-18T09:11:05.674927Z","steps":["trace[323921766] 'process raft request'  (duration: 265.261095ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:11:05.888901Z","caller":"traceutil/trace.go:172","msg":"trace[1989917116] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"199.958561ms","start":"2025-10-18T09:11:05.688921Z","end":"2025-10-18T09:11:05.888880Z","steps":["trace[1989917116] 'process raft request'  (duration: 199.899084ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:11:05.889013Z","caller":"traceutil/trace.go:172","msg":"trace[1079851862] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"204.559158ms","start":"2025-10-18T09:11:05.684440Z","end":"2025-10-18T09:11:05.889000Z","steps":["trace[1079851862] 'process raft request'  (duration: 198.55483ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:11:06.258017Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"327.428878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" limit:1 ","response":"range_response_count:1 size:4101"}
	{"level":"info","ts":"2025-10-18T09:11:06.258172Z","caller":"traceutil/trace.go:172","msg":"trace[1055821814] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-66bc5c9577; range_end:; response_count:1; response_revision:368; }","duration":"327.589826ms","start":"2025-10-18T09:11:05.930562Z","end":"2025-10-18T09:11:06.258152Z","steps":["trace[1055821814] 'agreement among raft nodes before linearized reading'  (duration: 89.808008ms)","trace[1055821814] 'range keys from in-memory index tree'  (duration: 237.574143ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:11:06.258238Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T09:11:05.930547Z","time spent":"327.658605ms","remote":"127.0.0.1:38916","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":4124,"request content":"key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" limit:1 "}
	{"level":"warn","ts":"2025-10-18T09:11:06.258267Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"237.795404ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789411450250639 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:363 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4336 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T09:11:06.258492Z","caller":"traceutil/trace.go:172","msg":"trace[1721373907] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"359.274354ms","start":"2025-10-18T09:11:05.899207Z","end":"2025-10-18T09:11:06.258482Z","steps":["trace[1721373907] 'process raft request'  (duration: 359.12572ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:11:06.258551Z","caller":"traceutil/trace.go:172","msg":"trace[1572380880] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"362.211074ms","start":"2025-10-18T09:11:05.896322Z","end":"2025-10-18T09:11:06.258533Z","steps":["trace[1572380880] 'process raft request'  (duration: 124.071174ms)","trace[1572380880] 'compare'  (duration: 237.563654ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:11:06.258632Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T09:11:05.896303Z","time spent":"362.286633ms","remote":"127.0.0.1:38868","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4385,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:363 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4336 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2025-10-18T09:11:06.258566Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T09:11:05.899188Z","time spent":"359.338027ms","remote":"127.0.0.1:38290","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5389,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kindnet-kbtnf\" mod_revision:335 > success:<request_put:<key:\"/registry/pods/kube-system/kindnet-kbtnf\" value_size:5341 >> failure:<request_range:<key:\"/registry/pods/kube-system/kindnet-kbtnf\" > >"}
	{"level":"info","ts":"2025-10-18T09:11:06.556547Z","caller":"traceutil/trace.go:172","msg":"trace[952638412] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"123.04278ms","start":"2025-10-18T09:11:06.433487Z","end":"2025-10-18T09:11:06.556529Z","steps":["trace[952638412] 'process raft request'  (duration: 120.423906ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:11:56 up 54 min,  0 user,  load average: 2.47, 2.58, 1.75
	Linux pause-182020 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eaf98db334b9006e58ac3d2f70ad88ace993203f90b34acea96e2f6ddfbeaec8] <==
	I1018 09:11:04.753413       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:11:04.753711       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 09:11:04.753859       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:11:04.753876       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:11:04.753896       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:11:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:11:04.953548       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:11:04.953570       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:11:04.953582       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:11:04.953689       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:11:34.953864       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:11:34.953974       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 09:11:34.954204       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 09:11:34.954324       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1018 09:11:36.553727       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:11:36.553761       1 metrics.go:72] Registering metrics
	I1018 09:11:36.553826       1 controller.go:711] "Syncing nftables rules"
	I1018 09:11:44.959477       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 09:11:44.959532       1 main.go:301] handling current node
	I1018 09:11:54.961458       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 09:11:54.961494       1 main.go:301] handling current node
	
	
	==> kube-apiserver [12a42de0cfa9baf1a98897014767f47b63a1c6ae213ffaf923be29be984231c3] <==
	I1018 09:10:56.025587       1 policy_source.go:240] refreshing policies
	E1018 09:10:56.064805       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1018 09:10:56.110958       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:10:56.114586       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:10:56.114855       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 09:10:56.120832       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:10:56.120909       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:10:56.228155       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:10:56.914593       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:10:56.919307       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:10:56.919328       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:10:57.562984       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:10:57.607150       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:10:57.718086       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:10:57.724098       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1018 09:10:57.725215       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:10:57.729710       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:10:57.940797       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:10:58.734854       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:10:58.746210       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:10:58.756623       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:11:03.693224       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:11:03.698185       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:11:03.792197       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:11:03.993550       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8807016bdc6e7d79c7d0284d142aa731ef1ac4dc315dba67d96ef49d194d63d2] <==
	I1018 09:11:02.928392       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:11:02.937869       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:11:02.939076       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:11:02.939113       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:11:02.939182       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:11:02.939239       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:11:02.939272       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 09:11:02.939315       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:11:02.939315       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:11:02.939464       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:11:02.939468       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:11:02.939573       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:11:02.939582       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:11:02.939643       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:11:02.939798       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:11:02.940040       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:11:02.940077       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:11:02.940397       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:11:02.940417       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:11:02.944650       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:11:02.950684       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:11:02.951965       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:11:02.960035       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:11:02.971520       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:11:47.895082       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d7a65a8f4fe938ccc6475ccee4fbc8b3e900a2ff7eddc8b7274ea535e3bfcea8] <==
	I1018 09:11:04.641540       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:11:04.716388       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:11:04.817414       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:11:04.817453       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1018 09:11:04.817537       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:11:04.839578       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:11:04.839644       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:11:04.846234       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:11:04.846637       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:11:04.846667       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:11:04.848057       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:11:04.848085       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:11:04.848111       1 config.go:200] "Starting service config controller"
	I1018 09:11:04.848116       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:11:04.848133       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:11:04.848138       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:11:04.848178       1 config.go:309] "Starting node config controller"
	I1018 09:11:04.848186       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:11:04.848192       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:11:04.948962       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:11:04.949017       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:11:04.949026       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [04e5f45aeb8859409b77f6e8ae85fde70680a8b11a8d7d93b61866eee5c7d370] <==
	E1018 09:10:55.985146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:10:55.985322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:10:55.988651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:10:55.989920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:10:55.989939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:10:55.990039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:10:55.990233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:10:55.989939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:10:55.990361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:10:55.990412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:10:55.990460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:10:56.830295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:10:56.866281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:10:56.878386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:10:56.957763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:10:56.966216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 09:10:57.033386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:10:57.089460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:10:57.100738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:10:57.106289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:10:57.154684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:10:57.247627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:10:57.265488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:10:57.334056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1018 09:10:59.776396       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:10:59 pause-182020 kubelet[1325]: E1018 09:10:59.601955    1325 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-182020\" already exists" pod="kube-system/kube-controller-manager-pause-182020"
	Oct 18 09:10:59 pause-182020 kubelet[1325]: I1018 09:10:59.619863    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-182020" podStartSLOduration=1.619788832 podStartE2EDuration="1.619788832s" podCreationTimestamp="2025-10-18 09:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:10:59.619440274 +0000 UTC m=+1.138004731" watchObservedRunningTime="2025-10-18 09:10:59.619788832 +0000 UTC m=+1.138353287"
	Oct 18 09:10:59 pause-182020 kubelet[1325]: I1018 09:10:59.643991    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-182020" podStartSLOduration=1.6439658769999999 podStartE2EDuration="1.643965877s" podCreationTimestamp="2025-10-18 09:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:10:59.631609422 +0000 UTC m=+1.150173889" watchObservedRunningTime="2025-10-18 09:10:59.643965877 +0000 UTC m=+1.162530336"
	Oct 18 09:10:59 pause-182020 kubelet[1325]: I1018 09:10:59.659665    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-182020" podStartSLOduration=2.659645183 podStartE2EDuration="2.659645183s" podCreationTimestamp="2025-10-18 09:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:10:59.644558749 +0000 UTC m=+1.163123206" watchObservedRunningTime="2025-10-18 09:10:59.659645183 +0000 UTC m=+1.178209640"
	Oct 18 09:10:59 pause-182020 kubelet[1325]: I1018 09:10:59.674092    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-182020" podStartSLOduration=1.674067733 podStartE2EDuration="1.674067733s" podCreationTimestamp="2025-10-18 09:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:10:59.659943008 +0000 UTC m=+1.178507456" watchObservedRunningTime="2025-10-18 09:10:59.674067733 +0000 UTC m=+1.192632190"
	Oct 18 09:11:02 pause-182020 kubelet[1325]: I1018 09:11:02.907041    1325 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:11:02 pause-182020 kubelet[1325]: I1018 09:11:02.907879    1325 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.090948    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/89b40206-c23b-47d5-9f2c-e653f39823f8-cni-cfg\") pod \"kindnet-kbtnf\" (UID: \"89b40206-c23b-47d5-9f2c-e653f39823f8\") " pod="kube-system/kindnet-kbtnf"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.091017    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89b40206-c23b-47d5-9f2c-e653f39823f8-lib-modules\") pod \"kindnet-kbtnf\" (UID: \"89b40206-c23b-47d5-9f2c-e653f39823f8\") " pod="kube-system/kindnet-kbtnf"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.091054    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bfck\" (UniqueName: \"kubernetes.io/projected/3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc-kube-api-access-4bfck\") pod \"kube-proxy-zlxhp\" (UID: \"3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc\") " pod="kube-system/kube-proxy-zlxhp"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.091079    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89b40206-c23b-47d5-9f2c-e653f39823f8-xtables-lock\") pod \"kindnet-kbtnf\" (UID: \"89b40206-c23b-47d5-9f2c-e653f39823f8\") " pod="kube-system/kindnet-kbtnf"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.091102    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-876l9\" (UniqueName: \"kubernetes.io/projected/89b40206-c23b-47d5-9f2c-e653f39823f8-kube-api-access-876l9\") pod \"kindnet-kbtnf\" (UID: \"89b40206-c23b-47d5-9f2c-e653f39823f8\") " pod="kube-system/kindnet-kbtnf"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.091413    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc-xtables-lock\") pod \"kube-proxy-zlxhp\" (UID: \"3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc\") " pod="kube-system/kube-proxy-zlxhp"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.091489    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc-kube-proxy\") pod \"kube-proxy-zlxhp\" (UID: \"3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc\") " pod="kube-system/kube-proxy-zlxhp"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.091527    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc-lib-modules\") pod \"kube-proxy-zlxhp\" (UID: \"3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc\") " pod="kube-system/kube-proxy-zlxhp"
	Oct 18 09:11:05 pause-182020 kubelet[1325]: I1018 09:11:05.891056    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zlxhp" podStartSLOduration=1.891036215 podStartE2EDuration="1.891036215s" podCreationTimestamp="2025-10-18 09:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:11:05.890415732 +0000 UTC m=+7.408980190" watchObservedRunningTime="2025-10-18 09:11:05.891036215 +0000 UTC m=+7.409600673"
	Oct 18 09:11:06 pause-182020 kubelet[1325]: I1018 09:11:06.260695    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kbtnf" podStartSLOduration=2.26067086 podStartE2EDuration="2.26067086s" podCreationTimestamp="2025-10-18 09:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:11:06.260195569 +0000 UTC m=+7.778760030" watchObservedRunningTime="2025-10-18 09:11:06.26067086 +0000 UTC m=+7.779235318"
	Oct 18 09:11:45 pause-182020 kubelet[1325]: I1018 09:11:45.234354    1325 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 09:11:45 pause-182020 kubelet[1325]: I1018 09:11:45.288698    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a6a8dd2-6620-4f70-8754-7205c4c93f06-config-volume\") pod \"coredns-66bc5c9577-s4g4q\" (UID: \"2a6a8dd2-6620-4f70-8754-7205c4c93f06\") " pod="kube-system/coredns-66bc5c9577-s4g4q"
	Oct 18 09:11:45 pause-182020 kubelet[1325]: I1018 09:11:45.288765    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9lql\" (UniqueName: \"kubernetes.io/projected/2a6a8dd2-6620-4f70-8754-7205c4c93f06-kube-api-access-c9lql\") pod \"coredns-66bc5c9577-s4g4q\" (UID: \"2a6a8dd2-6620-4f70-8754-7205c4c93f06\") " pod="kube-system/coredns-66bc5c9577-s4g4q"
	Oct 18 09:11:45 pause-182020 kubelet[1325]: I1018 09:11:45.715501    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s4g4q" podStartSLOduration=41.7154813 podStartE2EDuration="41.7154813s" podCreationTimestamp="2025-10-18 09:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:11:45.715213245 +0000 UTC m=+47.233777701" watchObservedRunningTime="2025-10-18 09:11:45.7154813 +0000 UTC m=+47.234045756"
	Oct 18 09:11:54 pause-182020 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:11:54 pause-182020 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:11:54 pause-182020 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:11:54 pause-182020 systemd[1]: kubelet.service: Consumed 2.329s CPU time.
	

-- /stdout --
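
The tail of the minikube logs above gives the backdrop to the pause failure: etcd reports several "apply request took too long" traces well past its 100ms expected-duration, kindnet spends 30s in i/o timeouts reaching the apiserver at 10.96.0.1:443, and kubelet.service is stopped at 09:11:54 by the pause operation itself. As a minimal sketch (not part of the test harness) of machine-checking such etcd traces, the hypothetical Go snippet below parses one abridged JSON entry from the log and flags durations over the 100ms threshold; the struct fields mirror the keys visible above.

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// trace models only the fields of the etcd log entries we need here.
type trace struct {
	Msg      string `json:"msg"`
	Duration string `json:"duration"`
}

func main() {
	// Abridged copy of the "trace[1721373907] transaction" entry above.
	line := `{"level":"info","msg":"trace[1721373907] transaction","duration":"359.274354ms"}`
	var t trace
	if err := json.Unmarshal([]byte(line), &t); err != nil {
		panic(err)
	}
	d, err := time.ParseDuration(t.Duration)
	if err != nil {
		panic(err)
	}
	// etcd warns when an apply takes longer than its 100ms expected-duration.
	if d > 100*time.Millisecond {
		fmt.Printf("%s took %s (over etcd's 100ms expected-duration)\n", t.Msg, d)
	}
}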
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-182020 -n pause-182020
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-182020 -n pause-182020: exit status 2 (324.297139ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
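
minikube's status command exits non-zero whenever any component is not in its expected state, which is why the harness treats exit status 2 as "(may be ok)" during post-mortems: here the apiserver still prints Running even though the pause verb failed. A minimal sketch, assuming the binary path used throughout this report, of running the same check while tolerating exit status 2 (an illustration, not the actual helpers_test.go code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the post-mortem helper runs above.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "pause-182020", "-n", "pause-182020")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out) // e.g. "Running"

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 2 signals a component in an unexpected state;
		// the harness logs it as "(may be ok)" and keeps going.
		fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
	} else if err != nil {
		panic(err) // the binary itself could not be run
	}
}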
helpers_test.go:269: (dbg) Run:  kubectl --context pause-182020 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
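
The "<empty>" placeholders mean the proxy variables were simply unset on the CI host. A tiny illustrative Go sketch of how such a snapshot line can be produced (the snap helper is hypothetical):

package main

import (
	"fmt"
	"os"
)

// snap returns the variable's value, or "<empty>" when it is unset.
func snap(key string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return "<empty>"
}

func main() {
	fmt.Printf("PROXY env: HTTP_PROXY=%q HTTPS_PROXY=%q NO_PROXY=%q\n",
		snap("HTTP_PROXY"), snap("HTTPS_PROXY"), snap("NO_PROXY"))
}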
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-182020
helpers_test.go:243: (dbg) docker inspect pause-182020:

-- stdout --
	[
	    {
	        "Id": "baf0e51c83d2f433af5a5a4743421fc7956e698e7fab5fe7c0aec16c33c5044b",
	        "Created": "2025-10-18T09:10:42.215698218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213968,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:10:42.264857749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/baf0e51c83d2f433af5a5a4743421fc7956e698e7fab5fe7c0aec16c33c5044b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/baf0e51c83d2f433af5a5a4743421fc7956e698e7fab5fe7c0aec16c33c5044b/hostname",
	        "HostsPath": "/var/lib/docker/containers/baf0e51c83d2f433af5a5a4743421fc7956e698e7fab5fe7c0aec16c33c5044b/hosts",
	        "LogPath": "/var/lib/docker/containers/baf0e51c83d2f433af5a5a4743421fc7956e698e7fab5fe7c0aec16c33c5044b/baf0e51c83d2f433af5a5a4743421fc7956e698e7fab5fe7c0aec16c33c5044b-json.log",
	        "Name": "/pause-182020",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-182020:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-182020",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "baf0e51c83d2f433af5a5a4743421fc7956e698e7fab5fe7c0aec16c33c5044b",
	                "LowerDir": "/var/lib/docker/overlay2/bf5a2bf4a307ee5fc5456cb16516a6e5f3e5897435b46d49813412d2bda54460-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf5a2bf4a307ee5fc5456cb16516a6e5f3e5897435b46d49813412d2bda54460/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf5a2bf4a307ee5fc5456cb16516a6e5f3e5897435b46d49813412d2bda54460/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf5a2bf4a307ee5fc5456cb16516a6e5f3e5897435b46d49813412d2bda54460/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-182020",
	                "Source": "/var/lib/docker/volumes/pause-182020/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-182020",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-182020",
	                "name.minikube.sigs.k8s.io": "pause-182020",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eadaf7d9dd66fe2ddd99d19b0879ed1750bebcc8f22f317cf111488bb5623698",
	            "SandboxKey": "/var/run/docker/netns/eadaf7d9dd66",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33033"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33034"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33037"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33035"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33036"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-182020": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:3a:53:cd:4e:b0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "34e553f31b74848eb25394f53660a6e6a3a7608b4a92afc6c1411bb7365b42f1",
	                    "EndpointID": "8bcc8d78c83c4629118298bf34719c2b26ab4ce9c959efde052fc30e25e69c9a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-182020",
	                        "baf0e51c83d2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
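
The inspect output confirms the container itself is healthy: State.Status is "running", Paused is false, and 8443/tcp is published on 127.0.0.1:33036, so the pause failure sits inside the guest rather than at the Docker layer. Rather than reading the full JSON, individual fields can be pulled with docker's built-in Go-template formatter; a small sketch (the inspect helper name is hypothetical, the templates are standard docker inspect syntax):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspect runs `docker inspect -f <format> <name>` and trims the result.
func inspect(format, name string) string {
	out, err := exec.Command("docker", "inspect", "-f", format, name).Output()
	if err != nil {
		return fmt.Sprintf("inspect failed: %v", err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	name := "pause-182020"
	fmt.Println("state:", inspect("{{.State.Status}}", name))
	// Index the Ports map by "port/proto"; matches the 8443/tcp -> 33036 mapping above.
	fmt.Println("8443/tcp ->", inspect(`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`, name))
}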
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-182020 -n pause-182020
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-182020 -n pause-182020: exit status 2 (349.74472ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-182020 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-182020 logs -n 25: (1.075885678s)
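
Every command in this report is wrapped in the harness's "(dbg) Run:" / "(dbg) Done:" pair, with the elapsed time appended on success (1.075885678s for the logs call above). A rough stand-in for that wrapper, useful when replaying these commands by hand (dbgRun is hypothetical, not the harness's actual helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// dbgRun executes a command, timing it and logging in the report's style.
func dbgRun(args ...string) ([]byte, error) {
	joined := strings.Join(args, " ")
	fmt.Printf("(dbg) Run:  %s\n", joined)
	start := time.Now()
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err == nil {
		fmt.Printf("(dbg) Done: %s: (%s)\n", joined, time.Since(start))
	}
	return out, err
}

func main() {
	out, _ := dbgRun("out/minikube-linux-amd64", "-p", "pause-182020", "logs", "-n", "25")
	fmt.Print(string(out))
}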
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p running-upgrade-152288 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ running-upgrade-152288    │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ delete  │ -p missing-upgrade-196626                                                                                                                                                                                                 │ missing-upgrade-196626    │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ start   │ -p force-systemd-env-980759 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-980759  │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ ssh     │ force-systemd-flag-619251 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-619251 │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ delete  │ -p force-systemd-flag-619251                                                                                                                                                                                              │ force-systemd-flag-619251 │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ start   │ -p cert-expiration-558693 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-558693    │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ delete  │ -p running-upgrade-152288                                                                                                                                                                                                 │ running-upgrade-152288    │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ delete  │ -p force-systemd-env-980759                                                                                                                                                                                               │ force-systemd-env-980759  │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ start   │ -p cert-options-043492 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-043492       │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ start   │ -p pause-182020 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-182020              │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:11 UTC │
	│ ssh     │ cert-options-043492 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-043492       │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ ssh     │ -p cert-options-043492 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-043492       │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:10 UTC │
	│ delete  │ -p cert-options-043492                                                                                                                                                                                                    │ cert-options-043492       │ jenkins │ v1.37.0 │ 18 Oct 25 09:10 UTC │ 18 Oct 25 09:11 UTC │
	│ start   │ -p NoKubernetes-548249 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                             │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │                     │
	│ start   │ -p NoKubernetes-548249 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                     │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │ 18 Oct 25 09:11 UTC │
	│ start   │ -p NoKubernetes-548249 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                     │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │ 18 Oct 25 09:11 UTC │
	│ delete  │ -p NoKubernetes-548249                                                                                                                                                                                                    │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │ 18 Oct 25 09:11 UTC │
	│ start   │ -p NoKubernetes-548249 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                     │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │ 18 Oct 25 09:11 UTC │
	│ ssh     │ -p NoKubernetes-548249 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │                     │
	│ stop    │ -p NoKubernetes-548249                                                                                                                                                                                                    │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │ 18 Oct 25 09:11 UTC │
	│ start   │ -p pause-182020 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-182020              │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │ 18 Oct 25 09:11 UTC │
	│ start   │ -p NoKubernetes-548249 --driver=docker  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │ 18 Oct 25 09:11 UTC │
	│ pause   │ -p pause-182020 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-182020              │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │                     │
	│ ssh     │ -p NoKubernetes-548249 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │                     │
	│ delete  │ -p NoKubernetes-548249                                                                                                                                                                                                    │ NoKubernetes-548249       │ jenkins │ v1.37.0 │ 18 Oct 25 09:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:11:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:11:48.756862  227604 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:11:48.757168  227604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:11:48.757173  227604 out.go:374] Setting ErrFile to fd 2...
	I1018 09:11:48.757178  227604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:11:48.757487  227604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:11:48.757947  227604 out.go:368] Setting JSON to false
	I1018 09:11:48.759174  227604 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3257,"bootTime":1760775452,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:11:48.759258  227604 start.go:141] virtualization: kvm guest
	I1018 09:11:48.761298  227604 out.go:179] * [NoKubernetes-548249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:11:48.763160  227604 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:11:48.763161  227604 notify.go:220] Checking for updates...
	I1018 09:11:48.764602  227604 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:11:48.766399  227604 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:11:48.767778  227604 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:11:48.769106  227604 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:11:48.770203  227604 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:11:48.772079  227604 config.go:182] Loaded profile config "NoKubernetes-548249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1018 09:11:48.772788  227604 start.go:1804] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1018 09:11:48.772809  227604 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:11:48.801542  227604 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:11:48.801631  227604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:11:48.867868  227604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 09:11:48.856621704 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:11:48.867997  227604 docker.go:318] overlay module found
	I1018 09:11:48.869707  227604 out.go:179] * Using the docker driver based on existing profile
	I1018 09:11:48.870733  227604 start.go:305] selected driver: docker
	I1018 09:11:48.870740  227604 start.go:925] validating driver "docker" against &{Name:NoKubernetes-548249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-548249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:11:48.870811  227604 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:11:48.870886  227604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:11:48.934015  227604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 09:11:48.924031616 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:11:48.934870  227604 cni.go:84] Creating CNI manager for ""
	I1018 09:11:48.934934  227604 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:11:48.934987  227604 start.go:349] cluster config:
	{Name:NoKubernetes-548249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-548249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:11:48.937423  227604 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-548249
	I1018 09:11:48.938499  227604 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:11:48.939774  227604 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:11:48.940850  227604 preload.go:183] Checking if preload exists for k8s version v0.0.0 and runtime crio
	I1018 09:11:48.940973  227604 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:11:48.962777  227604 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:11:48.962792  227604 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	W1018 09:11:48.974619  227604 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1018 09:11:49.062127  227604 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
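
The two 404s above are expected rather than a failure: with KubernetesVersion v0.0.0 (the NoKubernetes profile) there is no preloaded image tarball published, so minikube proceeds with the cached kic base image alone. A minimal sketch to reproduce the probe by hand, using the exact URL from the log:

    # HEAD-check the preload URL; 404 is the expected result for v0.0.0.
    curl -s -o /dev/null -w '%{http_code}\n' -I \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4"
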
	I1018 09:11:49.062294  227604 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/NoKubernetes-548249/config.json ...
	I1018 09:11:49.062612  227604 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:11:49.062642  227604 start.go:360] acquireMachinesLock for NoKubernetes-548249: {Name:mk4f850bb94a692e49c3051cb30c34ec05dbe073 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:11:49.062723  227604 start.go:364] duration metric: took 50.017µs to acquireMachinesLock for "NoKubernetes-548249"
	I1018 09:11:49.062739  227604 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:11:49.062744  227604 fix.go:54] fixHost starting: 
	I1018 09:11:49.063055  227604 cli_runner.go:164] Run: docker container inspect NoKubernetes-548249 --format={{.State.Status}}
	I1018 09:11:49.082776  227604 fix.go:112] recreateIfNeeded on NoKubernetes-548249: state=Stopped err=<nil>
	W1018 09:11:49.082796  227604 fix.go:138] unexpected machine state, will restart: <nil>
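
fixHost found the existing machine container in state=Stopped, so minikube reuses it instead of recreating. To inspect or nudge the same container by hand (container name taken from this run; `docker start` is roughly what the restart path does next, not the exact code path):

    # Query the machine container's state, then start it if it is stopped.
    docker container inspect NoKubernetes-548249 --format '{{.State.Status}}'
    docker start NoKubernetes-548249
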
	I1018 09:11:47.753419  189686 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:11:47.753889  189686 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:11:47.753956  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:11:47.754017  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:11:47.797776  189686 cri.go:89] found id: "7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b"
	I1018 09:11:47.797799  189686 cri.go:89] found id: ""
	I1018 09:11:47.797809  189686 logs.go:282] 1 containers: [7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b]
	I1018 09:11:47.797867  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:47.802491  189686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:11:47.802573  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:11:47.836243  189686 cri.go:89] found id: ""
	I1018 09:11:47.836331  189686 logs.go:282] 0 containers: []
	W1018 09:11:47.836357  189686 logs.go:284] No container was found matching "etcd"
	I1018 09:11:47.836365  189686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:11:47.836435  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:11:47.873575  189686 cri.go:89] found id: ""
	I1018 09:11:47.873666  189686 logs.go:282] 0 containers: []
	W1018 09:11:47.873694  189686 logs.go:284] No container was found matching "coredns"
	I1018 09:11:47.873704  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:11:47.873812  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:11:47.905125  189686 cri.go:89] found id: "86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e"
	I1018 09:11:47.905153  189686 cri.go:89] found id: ""
	I1018 09:11:47.905163  189686 logs.go:282] 1 containers: [86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e]
	I1018 09:11:47.905220  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:47.909675  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:11:47.909764  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:11:47.942653  189686 cri.go:89] found id: ""
	I1018 09:11:47.942684  189686 logs.go:282] 0 containers: []
	W1018 09:11:47.942696  189686 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:11:47.942703  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:11:47.942766  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:11:47.974447  189686 cri.go:89] found id: "20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312"
	I1018 09:11:47.974467  189686 cri.go:89] found id: ""
	I1018 09:11:47.974474  189686 logs.go:282] 1 containers: [20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312]
	I1018 09:11:47.974526  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:47.978752  189686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:11:47.978829  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:11:48.008790  189686 cri.go:89] found id: ""
	I1018 09:11:48.008812  189686 logs.go:282] 0 containers: []
	W1018 09:11:48.008819  189686 logs.go:284] No container was found matching "kindnet"
	I1018 09:11:48.008825  189686 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:11:48.008868  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:11:48.041665  189686 cri.go:89] found id: ""
	I1018 09:11:48.041703  189686 logs.go:282] 0 containers: []
	W1018 09:11:48.041714  189686 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:11:48.041726  189686 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:11:48.041740  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:11:48.094901  189686 logs.go:123] Gathering logs for container status ...
	I1018 09:11:48.094942  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:11:48.130656  189686 logs.go:123] Gathering logs for kubelet ...
	I1018 09:11:48.130694  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:11:48.222261  189686 logs.go:123] Gathering logs for dmesg ...
	I1018 09:11:48.222293  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:11:48.238937  189686 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:11:48.238968  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:11:48.300040  189686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:11:48.300069  189686 logs.go:123] Gathering logs for kube-apiserver [7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b] ...
	I1018 09:11:48.300085  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b"
	I1018 09:11:48.337429  189686 logs.go:123] Gathering logs for kube-scheduler [86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e] ...
	I1018 09:11:48.337464  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e"
	I1018 09:11:48.391528  189686 logs.go:123] Gathering logs for kube-controller-manager [20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312] ...
	I1018 09:11:48.391560  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312"
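
Because the apiserver healthz endpoint refuses connections, minikube falls back to enumerating control-plane containers with crictl and tailing the logs of whatever it finds. A condensed sketch of that diagnostic loop, assuming it is run inside the node (e.g. via `minikube ssh`):

    # Locate the kube-apiserver container (any state) and tail its logs,
    # mirroring the crictl calls in the log above.
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
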
	I1018 09:11:47.947904  227009 out.go:252] * Updating the running docker "pause-182020" container ...
	I1018 09:11:47.947938  227009 machine.go:93] provisionDockerMachine start ...
	I1018 09:11:47.948012  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:47.970012  227009 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:47.970268  227009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1018 09:11:47.970281  227009 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:11:48.126230  227009 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-182020
	
	I1018 09:11:48.126261  227009 ubuntu.go:182] provisioning hostname "pause-182020"
	I1018 09:11:48.126331  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:48.146923  227009 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:48.147185  227009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1018 09:11:48.147201  227009 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-182020 && echo "pause-182020" | sudo tee /etc/hostname
	I1018 09:11:48.297730  227009 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-182020
	
	I1018 09:11:48.297812  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:48.318678  227009 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:48.318984  227009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1018 09:11:48.319010  227009 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-182020' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-182020/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-182020' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:11:48.458105  227009 main.go:141] libmachine: SSH cmd err, output: <nil>: 
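
Provisioning sets the hostname over SSH and then pins it in /etc/hosts; the empty command output above means the append branch (the only one that prints anything) was not taken. A condensed, idempotent form of the same edit, with the hostname taken from this run:

    # Map the node name to 127.0.1.1 unless an entry already exists.
    grep -q 'pause-182020' /etc/hosts ||
      echo '127.0.1.1 pause-182020' | sudo tee -a /etc/hosts
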
	I1018 09:11:48.458131  227009 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:11:48.458147  227009 ubuntu.go:190] setting up certificates
	I1018 09:11:48.458158  227009 provision.go:84] configureAuth start
	I1018 09:11:48.458228  227009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-182020
	I1018 09:11:48.477775  227009 provision.go:143] copyHostCerts
	I1018 09:11:48.477836  227009 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:11:48.477864  227009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:11:48.477952  227009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:11:48.478086  227009 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:11:48.478101  227009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:11:48.478145  227009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:11:48.478255  227009 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:11:48.478267  227009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:11:48.478305  227009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:11:48.478417  227009 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.pause-182020 san=[127.0.0.1 192.168.103.2 localhost minikube pause-182020]
	I1018 09:11:48.624138  227009 provision.go:177] copyRemoteCerts
	I1018 09:11:48.624193  227009 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:11:48.624232  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:48.644806  227009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/pause-182020/id_rsa Username:docker}
	I1018 09:11:48.744883  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:11:48.765175  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 09:11:48.786491  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:11:48.808479  227009 provision.go:87] duration metric: took 350.308619ms to configureAuth
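
configureAuth refreshes the host CA material under .minikube and generates a machine server certificate whose SANs (listed at 09:11:48.478417 above) must cover every address clients will dial. One way to double-check the SANs that were baked in, using the path from the log:

    # Print the Subject Alternative Names of the generated server cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
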
	I1018 09:11:48.808529  227009 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:11:48.808764  227009 config.go:182] Loaded profile config "pause-182020": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:11:48.808885  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:48.830885  227009 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:48.831200  227009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1018 09:11:48.831226  227009 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:11:49.137749  227009 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:11:49.137774  227009 machine.go:96] duration metric: took 1.189827356s to provisionDockerMachine
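
The last provisioning step writes CRIO_MINIKUBE_OPTIONS (an insecure-registry flag for the 10.96.0.0/12 service CIDR) to /etc/sysconfig/crio.minikube and restarts CRI-O; the echoed file contents above confirm the write. To re-read it later, a sketch using minikube's ssh passthrough (assuming the profile still exists):

    # Inspect the option file the provisioner wrote on the node.
    minikube -p pause-182020 ssh -- cat /etc/sysconfig/crio.minikube
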
	I1018 09:11:49.137787  227009 start.go:293] postStartSetup for "pause-182020" (driver="docker")
	I1018 09:11:49.137799  227009 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:11:49.137858  227009 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:11:49.137920  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:49.158515  227009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/pause-182020/id_rsa Username:docker}
	I1018 09:11:49.260998  227009 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:11:49.265288  227009 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:11:49.265320  227009 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:11:49.265333  227009 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:11:49.265409  227009 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:11:49.265523  227009 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:11:49.265645  227009 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:11:49.273924  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:11:49.293360  227009 start.go:296] duration metric: took 155.545423ms for postStartSetup
	I1018 09:11:49.293496  227009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:11:49.293557  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:49.314368  227009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/pause-182020/id_rsa Username:docker}
	I1018 09:11:49.412866  227009 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:11:49.419084  227009 fix.go:56] duration metric: took 1.495499236s for fixHost
	I1018 09:11:49.419107  227009 start.go:83] releasing machines lock for "pause-182020", held for 1.49554297s
	I1018 09:11:49.419161  227009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-182020
	I1018 09:11:49.438223  227009 ssh_runner.go:195] Run: cat /version.json
	I1018 09:11:49.438278  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:49.438306  227009 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:11:49.438391  227009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182020
	I1018 09:11:49.458939  227009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/pause-182020/id_rsa Username:docker}
	I1018 09:11:49.459691  227009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/pause-182020/id_rsa Username:docker}
	I1018 09:11:49.625141  227009 ssh_runner.go:195] Run: systemctl --version
	I1018 09:11:49.632022  227009 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:11:49.679764  227009 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:11:49.685559  227009 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:11:49.685629  227009 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:11:49.694663  227009 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
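
When kindnet is the recommended CNI, minikube sidelines any conflicting bridge/podman CNI configs by renaming them with a .mk_disabled suffix; here none existed, so there was nothing to disable. The same rename with proper shell quoting (the log line above shows the already-expanded form):

    # Disable conflicting CNI configs the way the find/mv above does.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
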
	I1018 09:11:49.694686  227009 start.go:495] detecting cgroup driver to use...
	I1018 09:11:49.694720  227009 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:11:49.694759  227009 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:11:49.711957  227009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:11:49.725459  227009 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:11:49.725522  227009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:11:49.742476  227009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:11:49.756466  227009 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:11:49.868275  227009 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:11:49.979406  227009 docker.go:234] disabling docker service ...
	I1018 09:11:49.979480  227009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:11:49.994864  227009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:11:50.008366  227009 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:11:50.116263  227009 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:11:50.229932  227009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:11:50.243903  227009 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:11:50.260011  227009 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:11:50.260067  227009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:50.271370  227009 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:11:50.271441  227009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:50.281185  227009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:50.290893  227009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:50.300647  227009 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:11:50.309429  227009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:50.318945  227009 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:50.328441  227009 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:50.338249  227009 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:11:50.346364  227009 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:11:50.354889  227009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:11:50.460260  227009 ssh_runner.go:195] Run: sudo systemctl restart crio
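
The sed edits above pin the pause image, switch the cgroup manager to systemd, and open unprivileged ports in /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and CRI-O restart. A quick way to spot-check the rewritten drop-in on the node:

    # Confirm the values the sed edits were supposed to set.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
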
	I1018 09:11:50.615826  227009 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:11:50.615886  227009 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:11:50.620528  227009 start.go:563] Will wait 60s for crictl version
	I1018 09:11:50.620586  227009 ssh_runner.go:195] Run: which crictl
	I1018 09:11:50.625263  227009 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:11:50.650201  227009 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:11:50.650281  227009 ssh_runner.go:195] Run: crio --version
	I1018 09:11:50.679769  227009 ssh_runner.go:195] Run: crio --version
	I1018 09:11:50.711204  227009 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:11:50.712601  227009 cli_runner.go:164] Run: docker network inspect pause-182020 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:11:50.731466  227009 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:11:50.736136  227009 kubeadm.go:883] updating cluster {Name:pause-182020 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-182020 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:11:50.736292  227009 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:11:50.736338  227009 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:11:50.769211  227009 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:11:50.769234  227009 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:11:50.769286  227009 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:11:50.795215  227009 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:11:50.795239  227009 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:11:50.795250  227009 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:11:50.795389  227009 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-182020 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-182020 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
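
The kubelet unit above is rendered in memory and scp'd to the node a few lines below (363 bytes to the 10-kubeadm.conf drop-in). To see the unit exactly as systemd loads it, drop-ins included, a sketch (assuming the profile still exists):

    # Show the effective kubelet unit plus drop-ins on the node.
    minikube -p pause-182020 ssh -- systemctl cat kubelet
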
	I1018 09:11:50.795474  227009 ssh_runner.go:195] Run: crio config
	I1018 09:11:50.841887  227009 cni.go:84] Creating CNI manager for ""
	I1018 09:11:50.841905  227009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:11:50.841917  227009 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:11:50.841936  227009 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-182020 NodeName:pause-182020 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:11:50.842070  227009 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-182020"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
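This rendered kubeadm config is copied to /var/tmp/minikube/kubeadm.yaml.new (2211 bytes, per the scp below) and later diffed against the live copy; an empty diff is what lets restartPrimaryControlPlane skip reconfiguration. The comparison minikube runs, usable by hand on the node as well:

    # No output means the running cluster matches the desired config.
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
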
	I1018 09:11:50.842129  227009 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:11:50.850866  227009 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:11:50.850930  227009 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:11:50.859178  227009 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 09:11:50.873318  227009 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:11:50.886880  227009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1018 09:11:50.900234  227009 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:11:50.904356  227009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:11:51.025130  227009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:11:51.040251  227009 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020 for IP: 192.168.103.2
	I1018 09:11:51.040269  227009 certs.go:195] generating shared ca certs ...
	I1018 09:11:51.040283  227009 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:11:51.040476  227009 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:11:51.040526  227009 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:11:51.040539  227009 certs.go:257] generating profile certs ...
	I1018 09:11:51.040635  227009 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/client.key
	I1018 09:11:51.040726  227009 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/apiserver.key.71be5d44
	I1018 09:11:51.040785  227009 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/proxy-client.key
	I1018 09:11:51.040926  227009 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:11:51.040968  227009 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:11:51.040979  227009 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:11:51.041019  227009 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:11:51.041050  227009 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:11:51.041076  227009 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:11:51.041131  227009 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:11:51.041928  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:11:51.063571  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:11:51.084104  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:11:51.104545  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:11:51.124915  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 09:11:51.146461  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:11:51.166121  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:11:51.186287  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1018 09:11:51.206889  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:11:51.227271  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:11:51.248204  227009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:11:51.269233  227009 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:11:51.284060  227009 ssh_runner.go:195] Run: openssl version
	I1018 09:11:51.290675  227009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:11:51.299969  227009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:11:51.304066  227009 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:11:51.304118  227009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:11:51.343765  227009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:11:51.352322  227009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:11:51.362547  227009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:11:51.366485  227009 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:11:51.366552  227009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:11:51.402911  227009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:11:51.412173  227009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:11:51.421497  227009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:11:51.425780  227009 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:11:51.425846  227009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:11:51.463712  227009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
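
Each CA is installed by copying the PEM into /usr/share/ca-certificates and symlinking it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 above), the layout OpenSSL's certificate lookup expects. The hash-derived link, spelled out:

    # The link name is the cert's subject hash plus a ".0" suffix.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
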
	I1018 09:11:51.473138  227009 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:11:51.477617  227009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:11:51.515747  227009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:11:51.553825  227009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:11:51.591223  227009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:11:51.628388  227009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:11:51.665067  227009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
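
Each `-checkend 86400` call asks whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it stays valid past that window, and all six control-plane certs pass here, so no regeneration is needed. For example:

    # Exit 0: valid for at least another 24h. Exit 1: expiring or expired.
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo 'ok for 24h' || echo 'renew soon'
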
	I1018 09:11:51.700251  227009 kubeadm.go:400] StartCluster: {Name:pause-182020 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-182020 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:11:51.700413  227009 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:11:51.700466  227009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:11:51.729888  227009 cri.go:89] found id: "f5ca822f709a1b9798c695190a4248e26ef175c5318934a88c07b7b1904bff04"
	I1018 09:11:51.729910  227009 cri.go:89] found id: "d7a65a8f4fe938ccc6475ccee4fbc8b3e900a2ff7eddc8b7274ea535e3bfcea8"
	I1018 09:11:51.729914  227009 cri.go:89] found id: "eaf98db334b9006e58ac3d2f70ad88ace993203f90b34acea96e2f6ddfbeaec8"
	I1018 09:11:51.729917  227009 cri.go:89] found id: "8807016bdc6e7d79c7d0284d142aa731ef1ac4dc315dba67d96ef49d194d63d2"
	I1018 09:11:51.729921  227009 cri.go:89] found id: "04e5f45aeb8859409b77f6e8ae85fde70680a8b11a8d7d93b61866eee5c7d370"
	I1018 09:11:51.729924  227009 cri.go:89] found id: "12a42de0cfa9baf1a98897014767f47b63a1c6ae213ffaf923be29be984231c3"
	I1018 09:11:51.729926  227009 cri.go:89] found id: "d92c7b180dca0476851d7f6127a563b17c31e334cb49a9c36a31c2355cb3eeae"
	I1018 09:11:51.729929  227009 cri.go:89] found id: ""
	I1018 09:11:51.729969  227009 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:11:51.741870  227009 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:11:51Z" level=error msg="open /run/runc: no such file or directory"
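
`runc list` wants its state root (/run/runc by default) to exist, and it does not on this node, so the paused-container probe logs a warning and minikube carries on with the crictl answer it already has. Before querying runc directly it can help to see which state roots are actually present (the candidate paths below are an assumption; they vary by runtime and configuration):

    # List whichever runtime state directories exist on the node.
    sudo ls -d /run/runc /run/crun /run/containers 2>/dev/null || true
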
	I1018 09:11:51.741992  227009 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:11:51.750734  227009 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:11:51.750759  227009 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:11:51.750800  227009 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:11:51.758868  227009 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:11:51.759648  227009 kubeconfig.go:125] found "pause-182020" server: "https://192.168.103.2:8443"
	I1018 09:11:51.760477  227009 kapi.go:59] client config for pause-182020: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/client.key", CAFile:"/home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:11:51.760880  227009 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 09:11:51.760893  227009 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 09:11:51.760898  227009 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 09:11:51.760902  227009 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 09:11:51.760905  227009 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 09:11:51.761186  227009 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:11:51.769619  227009 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1018 09:11:51.769650  227009 kubeadm.go:601] duration metric: took 18.885507ms to restartPrimaryControlPlane
	I1018 09:11:51.769661  227009 kubeadm.go:402] duration metric: took 69.420174ms to StartCluster
	I1018 09:11:51.769681  227009 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:11:51.769756  227009 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:11:51.770738  227009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:11:51.770957  227009 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:11:51.771023  227009 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:11:51.771143  227009 config.go:182] Loaded profile config "pause-182020": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:11:51.774122  227009 out.go:179] * Verifying Kubernetes components...
	I1018 09:11:51.774140  227009 out.go:179] * Enabled addons: 
	I1018 09:11:51.775400  227009 addons.go:514] duration metric: took 4.384353ms for enable addons: enabled=[]
	I1018 09:11:51.775437  227009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:11:51.888528  227009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:11:51.903163  227009 node_ready.go:35] waiting up to 6m0s for node "pause-182020" to be "Ready" ...
	I1018 09:11:51.911196  227009 node_ready.go:49] node "pause-182020" is "Ready"
	I1018 09:11:51.911229  227009 node_ready.go:38] duration metric: took 8.024706ms for node "pause-182020" to be "Ready" ...
	I1018 09:11:51.911246  227009 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:11:51.911298  227009 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:11:51.923793  227009 api_server.go:72] duration metric: took 152.807283ms to wait for apiserver process to appear ...
	I1018 09:11:51.923818  227009 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:11:51.923839  227009 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:11:51.929415  227009 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 09:11:51.930444  227009 api_server.go:141] control plane version: v1.34.1
	I1018 09:11:51.930476  227009 api_server.go:131] duration metric: took 6.650412ms to wait for apiserver health ...
	I1018 09:11:51.930487  227009 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:11:51.934198  227009 system_pods.go:59] 7 kube-system pods found
	I1018 09:11:51.934235  227009 system_pods.go:61] "coredns-66bc5c9577-s4g4q" [2a6a8dd2-6620-4f70-8754-7205c4c93f06] Running
	I1018 09:11:51.934241  227009 system_pods.go:61] "etcd-pause-182020" [e0d74bbe-9879-4ae2-88d8-5c9bcba491ab] Running
	I1018 09:11:51.934245  227009 system_pods.go:61] "kindnet-kbtnf" [89b40206-c23b-47d5-9f2c-e653f39823f8] Running
	I1018 09:11:51.934249  227009 system_pods.go:61] "kube-apiserver-pause-182020" [0b8729c9-cc0a-46ee-ac7a-5a389356ab45] Running
	I1018 09:11:51.934252  227009 system_pods.go:61] "kube-controller-manager-pause-182020" [6892a1ce-d835-4e60-a76b-06a6341903dc] Running
	I1018 09:11:51.934256  227009 system_pods.go:61] "kube-proxy-zlxhp" [3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc] Running
	I1018 09:11:51.934259  227009 system_pods.go:61] "kube-scheduler-pause-182020" [a2ebf54c-64d2-4247-ac81-3f15ff9a32e8] Running
	I1018 09:11:51.934265  227009 system_pods.go:74] duration metric: took 3.77155ms to wait for pod list to return data ...
	I1018 09:11:51.934275  227009 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:11:51.936329  227009 default_sa.go:45] found service account: "default"
	I1018 09:11:51.936377  227009 default_sa.go:55] duration metric: took 2.094236ms for default service account to be created ...
	I1018 09:11:51.936388  227009 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:11:51.939074  227009 system_pods.go:86] 7 kube-system pods found
	I1018 09:11:51.939103  227009 system_pods.go:89] "coredns-66bc5c9577-s4g4q" [2a6a8dd2-6620-4f70-8754-7205c4c93f06] Running
	I1018 09:11:51.939112  227009 system_pods.go:89] "etcd-pause-182020" [e0d74bbe-9879-4ae2-88d8-5c9bcba491ab] Running
	I1018 09:11:51.939118  227009 system_pods.go:89] "kindnet-kbtnf" [89b40206-c23b-47d5-9f2c-e653f39823f8] Running
	I1018 09:11:51.939123  227009 system_pods.go:89] "kube-apiserver-pause-182020" [0b8729c9-cc0a-46ee-ac7a-5a389356ab45] Running
	I1018 09:11:51.939129  227009 system_pods.go:89] "kube-controller-manager-pause-182020" [6892a1ce-d835-4e60-a76b-06a6341903dc] Running
	I1018 09:11:51.939138  227009 system_pods.go:89] "kube-proxy-zlxhp" [3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc] Running
	I1018 09:11:51.939143  227009 system_pods.go:89] "kube-scheduler-pause-182020" [a2ebf54c-64d2-4247-ac81-3f15ff9a32e8] Running
	I1018 09:11:51.939156  227009 system_pods.go:126] duration metric: took 2.760776ms to wait for k8s-apps to be running ...
	I1018 09:11:51.939172  227009 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:11:51.939217  227009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:11:51.952831  227009 system_svc.go:56] duration metric: took 13.651712ms WaitForService to wait for kubelet
	I1018 09:11:51.952860  227009 kubeadm.go:586] duration metric: took 181.878065ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:11:51.952881  227009 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:11:51.955723  227009 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:11:51.955751  227009 node_conditions.go:123] node cpu capacity is 8
	I1018 09:11:51.955766  227009 node_conditions.go:105] duration metric: took 2.879443ms to run NodePressure ...
	I1018 09:11:51.955779  227009 start.go:241] waiting for startup goroutines ...
	I1018 09:11:51.955789  227009 start.go:246] waiting for cluster config update ...
	I1018 09:11:51.955801  227009 start.go:255] writing updated cluster config ...
	I1018 09:11:51.956132  227009 ssh_runner.go:195] Run: rm -f paused
	I1018 09:11:51.960241  227009 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:11:51.960892  227009 kapi.go:59] client config for pause-182020: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-5897/.minikube/profiles/pause-182020/client.key", CAFile:"/home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:11:51.963601  227009 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s4g4q" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:51.967801  227009 pod_ready.go:94] pod "coredns-66bc5c9577-s4g4q" is "Ready"
	I1018 09:11:51.967821  227009 pod_ready.go:86] duration metric: took 4.201245ms for pod "coredns-66bc5c9577-s4g4q" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:51.970097  227009 pod_ready.go:83] waiting for pod "etcd-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:51.974381  227009 pod_ready.go:94] pod "etcd-pause-182020" is "Ready"
	I1018 09:11:51.974433  227009 pod_ready.go:86] duration metric: took 4.286762ms for pod "etcd-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:51.976488  227009 pod_ready.go:83] waiting for pod "kube-apiserver-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:51.980284  227009 pod_ready.go:94] pod "kube-apiserver-pause-182020" is "Ready"
	I1018 09:11:51.980305  227009 pod_ready.go:86] duration metric: took 3.795009ms for pod "kube-apiserver-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:51.982339  227009 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:52.364503  227009 pod_ready.go:94] pod "kube-controller-manager-pause-182020" is "Ready"
	I1018 09:11:52.364533  227009 pod_ready.go:86] duration metric: took 382.160273ms for pod "kube-controller-manager-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:52.564713  227009 pod_ready.go:83] waiting for pod "kube-proxy-zlxhp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:52.963955  227009 pod_ready.go:94] pod "kube-proxy-zlxhp" is "Ready"
	I1018 09:11:52.963980  227009 pod_ready.go:86] duration metric: took 399.239645ms for pod "kube-proxy-zlxhp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:53.164227  227009 pod_ready.go:83] waiting for pod "kube-scheduler-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:53.564891  227009 pod_ready.go:94] pod "kube-scheduler-pause-182020" is "Ready"
	I1018 09:11:53.564918  227009 pod_ready.go:86] duration metric: took 400.663805ms for pod "kube-scheduler-pause-182020" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:11:53.564933  227009 pod_ready.go:40] duration metric: took 1.604659511s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:11:53.614948  227009 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:11:53.617162  227009 out.go:179] * Done! kubectl is now configured to use "pause-182020" cluster and "default" namespace by default
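The "minor skew: 0" line above compares the kubectl client's minor version with the cluster's. A small self-contained Go sketch of that skew computation (an illustration only; the real check lives in minikube's start code):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorOf returns the minor component of a well-formed
    // "major.minor.patch" version string.
    func minorOf(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	m, _ := strconv.Atoi(parts[1]) // sketch: error ignored for brevity
    	return m
    }

    func main() {
    	client, cluster := "1.34.1", "1.34.1"
    	skew := minorOf(client) - minorOf(cluster)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
    }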
	I1018 09:11:49.084715  227604 out.go:252] * Restarting existing docker container for "NoKubernetes-548249" ...
	I1018 09:11:49.084781  227604 cli_runner.go:164] Run: docker start NoKubernetes-548249
	I1018 09:11:49.356199  227604 cli_runner.go:164] Run: docker container inspect NoKubernetes-548249 --format={{.State.Status}}
	I1018 09:11:49.377687  227604 kic.go:430] container "NoKubernetes-548249" state is running.
	I1018 09:11:49.378055  227604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-548249
	I1018 09:11:49.398025  227604 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/NoKubernetes-548249/config.json ...
	I1018 09:11:49.398229  227604 machine.go:93] provisionDockerMachine start ...
	I1018 09:11:49.398285  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:49.419802  227604 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:49.420095  227604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 09:11:49.420101  227604 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:11:49.420843  227604 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35120->127.0.0.1:33048: read: connection reset by peer
	I1018 09:11:52.557475  227604 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-548249
	
	I1018 09:11:52.557492  227604 ubuntu.go:182] provisioning hostname "NoKubernetes-548249"
	I1018 09:11:52.557552  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:52.577102  227604 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:52.577306  227604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 09:11:52.577312  227604 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-548249 && echo "NoKubernetes-548249" | sudo tee /etc/hostname
	I1018 09:11:52.724080  227604 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-548249
	
	I1018 09:11:52.724140  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:52.744059  227604 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:52.744270  227604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 09:11:52.744281  227604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-548249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-548249/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-548249' | sudo tee -a /etc/hosts; 
				fi
			fi
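The guarded script above is idempotent: it touches /etc/hosts only when no line already ends with the hostname, either rewriting the existing 127.0.1.1 entry in place or appending a new one. After it runs, /etc/hosts carries a line of the form:

    127.0.1.1 NoKubernetes-548249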
	I1018 09:11:52.881686  227604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:11:52.881712  227604 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:11:52.881732  227604 ubuntu.go:190] setting up certificates
	I1018 09:11:52.881761  227604 provision.go:84] configureAuth start
	I1018 09:11:52.881815  227604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-548249
	I1018 09:11:52.901482  227604 provision.go:143] copyHostCerts
	I1018 09:11:52.901536  227604 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:11:52.901547  227604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:11:52.901616  227604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:11:52.901715  227604 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:11:52.901718  227604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:11:52.901741  227604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:11:52.901812  227604 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:11:52.901814  227604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:11:52.901836  227604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:11:52.901902  227604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-548249 san=[127.0.0.1 192.168.94.2 NoKubernetes-548249 localhost minikube]
	I1018 09:11:53.360978  227604 provision.go:177] copyRemoteCerts
	I1018 09:11:53.361029  227604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:11:53.361062  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:53.381246  227604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/NoKubernetes-548249/id_rsa Username:docker}
	I1018 09:11:53.481786  227604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:11:53.500405  227604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 09:11:53.519843  227604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:11:53.538876  227604 provision.go:87] duration metric: took 657.102341ms to configureAuth
	I1018 09:11:53.538896  227604 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:11:53.539054  227604 config.go:182] Loaded profile config "NoKubernetes-548249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1018 09:11:53.539142  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:53.559223  227604 main.go:141] libmachine: Using SSH client type: native
	I1018 09:11:53.559505  227604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 09:11:53.559521  227604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:11:53.816840  227604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:11:53.816854  227604 machine.go:96] duration metric: took 4.418619285s to provisionDockerMachine
	I1018 09:11:53.816864  227604 start.go:293] postStartSetup for "NoKubernetes-548249" (driver="docker")
	I1018 09:11:53.816872  227604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:11:53.816919  227604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:11:53.816948  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:53.837299  227604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/NoKubernetes-548249/id_rsa Username:docker}
	I1018 09:11:53.936298  227604 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:11:53.940355  227604 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:11:53.940380  227604 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:11:53.940393  227604 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:11:53.940445  227604 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:11:53.940527  227604 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:11:53.940614  227604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:11:53.953527  227604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:11:53.975123  227604 start.go:296] duration metric: took 158.244807ms for postStartSetup
	I1018 09:11:53.975216  227604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:11:53.975246  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:53.994884  227604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/NoKubernetes-548249/id_rsa Username:docker}
	I1018 09:11:54.091538  227604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:11:54.096575  227604 fix.go:56] duration metric: took 5.03382396s for fixHost
	I1018 09:11:54.096594  227604 start.go:83] releasing machines lock for "NoKubernetes-548249", held for 5.033862437s
	I1018 09:11:54.096670  227604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-548249
	I1018 09:11:54.119952  227604 ssh_runner.go:195] Run: cat /version.json
	I1018 09:11:54.120005  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:54.120021  227604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:11:54.120089  227604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-548249
	I1018 09:11:54.142736  227604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/NoKubernetes-548249/id_rsa Username:docker}
	I1018 09:11:54.143078  227604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/NoKubernetes-548249/id_rsa Username:docker}
	I1018 09:11:54.301849  227604 ssh_runner.go:195] Run: systemctl --version
	I1018 09:11:54.309557  227604 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:11:54.346026  227604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:11:54.351327  227604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:11:54.351420  227604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:11:54.360409  227604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:11:54.360439  227604 start.go:495] detecting cgroup driver to use...
	I1018 09:11:54.360478  227604 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:11:54.360532  227604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:11:54.379631  227604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:11:54.395393  227604 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:11:54.395445  227604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:11:54.416443  227604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:11:54.431783  227604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:11:54.541556  227604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:11:54.642629  227604 docker.go:234] disabling docker service ...
	I1018 09:11:54.642679  227604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:11:54.659169  227604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:11:54.672725  227604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:11:54.756489  227604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:11:54.839939  227604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:11:54.853258  227604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:11:54.868618  227604 download.go:108] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/linux/amd64/v0.0.0/kubeadm
	I1018 09:11:55.062644  227604 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 09:11:55.062720  227604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:55.077216  227604 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:11:55.077267  227604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:55.087707  227604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:55.097254  227604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:11:55.106960  227604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:11:55.115877  227604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:11:55.123991  227604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:11:55.132150  227604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:11:55.220398  227604 ssh_runner.go:195] Run: sudo systemctl restart crio
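Assuming each sed pattern above matched an existing key, the drop-in /etc/crio/crio.conf.d/02-crio.conf now carries these settings, which the crio restart picks up:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"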
	I1018 09:11:55.337632  227604 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:11:55.337706  227604 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:11:55.342490  227604 start.go:563] Will wait 60s for crictl version
	I1018 09:11:55.342556  227604 ssh_runner.go:195] Run: which crictl
	I1018 09:11:55.346684  227604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:11:55.373430  227604 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:11:55.373520  227604 ssh_runner.go:195] Run: crio --version
	I1018 09:11:55.405816  227604 ssh_runner.go:195] Run: crio --version
	I1018 09:11:55.438196  227604 out.go:179] * Preparing CRI-O 1.34.1 ...
	I1018 09:11:55.439797  227604 ssh_runner.go:195] Run: rm -f paused
	I1018 09:11:55.445624  227604 out.go:179] * Done! minikube is ready without Kubernetes!
	I1018 09:11:55.449224  227604 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:11:50.921956  189686 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:11:50.922565  189686 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:11:50.922628  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:11:50.922688  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:11:50.956372  189686 cri.go:89] found id: "7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b"
	I1018 09:11:50.956395  189686 cri.go:89] found id: ""
	I1018 09:11:50.956404  189686 logs.go:282] 1 containers: [7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b]
	I1018 09:11:50.956463  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:50.960453  189686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:11:50.960537  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:11:50.991133  189686 cri.go:89] found id: ""
	I1018 09:11:50.991155  189686 logs.go:282] 0 containers: []
	W1018 09:11:50.991162  189686 logs.go:284] No container was found matching "etcd"
	I1018 09:11:50.991168  189686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:11:50.991213  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:11:51.020511  189686 cri.go:89] found id: ""
	I1018 09:11:51.020543  189686 logs.go:282] 0 containers: []
	W1018 09:11:51.020551  189686 logs.go:284] No container was found matching "coredns"
	I1018 09:11:51.020560  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:11:51.020627  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:11:51.051163  189686 cri.go:89] found id: "86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e"
	I1018 09:11:51.051193  189686 cri.go:89] found id: ""
	I1018 09:11:51.051203  189686 logs.go:282] 1 containers: [86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e]
	I1018 09:11:51.051264  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:51.055119  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:11:51.055195  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:11:51.083991  189686 cri.go:89] found id: ""
	I1018 09:11:51.084023  189686 logs.go:282] 0 containers: []
	W1018 09:11:51.084032  189686 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:11:51.084038  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:11:51.084097  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:11:51.112498  189686 cri.go:89] found id: "20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312"
	I1018 09:11:51.112529  189686 cri.go:89] found id: ""
	I1018 09:11:51.112540  189686 logs.go:282] 1 containers: [20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312]
	I1018 09:11:51.112599  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:51.116672  189686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:11:51.116737  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:11:51.146899  189686 cri.go:89] found id: ""
	I1018 09:11:51.146923  189686 logs.go:282] 0 containers: []
	W1018 09:11:51.146933  189686 logs.go:284] No container was found matching "kindnet"
	I1018 09:11:51.146940  189686 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:11:51.146997  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:11:51.175436  189686 cri.go:89] found id: ""
	I1018 09:11:51.175463  189686 logs.go:282] 0 containers: []
	W1018 09:11:51.175474  189686 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:11:51.175484  189686 logs.go:123] Gathering logs for kube-scheduler [86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e] ...
	I1018 09:11:51.175499  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e"
	I1018 09:11:51.224426  189686 logs.go:123] Gathering logs for kube-controller-manager [20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312] ...
	I1018 09:11:51.224469  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312"
	I1018 09:11:51.255199  189686 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:11:51.255231  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:11:51.305975  189686 logs.go:123] Gathering logs for container status ...
	I1018 09:11:51.306011  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:11:51.339266  189686 logs.go:123] Gathering logs for kubelet ...
	I1018 09:11:51.339302  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:11:51.427620  189686 logs.go:123] Gathering logs for dmesg ...
	I1018 09:11:51.427648  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:11:51.442267  189686 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:11:51.442299  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:11:51.503030  189686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:11:51.503060  189686 logs.go:123] Gathering logs for kube-apiserver [7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b] ...
	I1018 09:11:51.503072  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b"
	I1018 09:11:54.037420  189686 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:11:54.037817  189686 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:11:54.037861  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:11:54.037942  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:11:54.067757  189686 cri.go:89] found id: "7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b"
	I1018 09:11:54.067780  189686 cri.go:89] found id: ""
	I1018 09:11:54.067788  189686 logs.go:282] 1 containers: [7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b]
	I1018 09:11:54.067840  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:54.071673  189686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:11:54.071732  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:11:54.102955  189686 cri.go:89] found id: ""
	I1018 09:11:54.102982  189686 logs.go:282] 0 containers: []
	W1018 09:11:54.102993  189686 logs.go:284] No container was found matching "etcd"
	I1018 09:11:54.103001  189686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:11:54.103059  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:11:54.136134  189686 cri.go:89] found id: ""
	I1018 09:11:54.136161  189686 logs.go:282] 0 containers: []
	W1018 09:11:54.136171  189686 logs.go:284] No container was found matching "coredns"
	I1018 09:11:54.136178  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:11:54.136334  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:11:54.168580  189686 cri.go:89] found id: "86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e"
	I1018 09:11:54.168601  189686 cri.go:89] found id: ""
	I1018 09:11:54.168610  189686 logs.go:282] 1 containers: [86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e]
	I1018 09:11:54.168666  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:54.172964  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:11:54.173030  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:11:54.202369  189686 cri.go:89] found id: ""
	I1018 09:11:54.202394  189686 logs.go:282] 0 containers: []
	W1018 09:11:54.202403  189686 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:11:54.202416  189686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:11:54.202466  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:11:54.229539  189686 cri.go:89] found id: "20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312"
	I1018 09:11:54.229564  189686 cri.go:89] found id: ""
	I1018 09:11:54.229574  189686 logs.go:282] 1 containers: [20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312]
	I1018 09:11:54.229624  189686 ssh_runner.go:195] Run: which crictl
	I1018 09:11:54.234650  189686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:11:54.234719  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:11:54.263596  189686 cri.go:89] found id: ""
	I1018 09:11:54.263626  189686 logs.go:282] 0 containers: []
	W1018 09:11:54.263636  189686 logs.go:284] No container was found matching "kindnet"
	I1018 09:11:54.263644  189686 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:11:54.263705  189686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:11:54.292652  189686 cri.go:89] found id: ""
	I1018 09:11:54.292680  189686 logs.go:282] 0 containers: []
	W1018 09:11:54.292688  189686 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:11:54.292697  189686 logs.go:123] Gathering logs for dmesg ...
	I1018 09:11:54.292709  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:11:54.308036  189686 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:11:54.308068  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:11:54.373766  189686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:11:54.373793  189686 logs.go:123] Gathering logs for kube-apiserver [7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b] ...
	I1018 09:11:54.373813  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7027c49f3b8f3e631a42588b854b959dc67b4015fbdcf089da949a1930646b4b"
	I1018 09:11:54.415826  189686 logs.go:123] Gathering logs for kube-scheduler [86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e] ...
	I1018 09:11:54.415869  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86f01e214f69dadfddb8202ec7a69d76aaed7800772fd9291732c69639a6ff2e"
	I1018 09:11:54.476788  189686 logs.go:123] Gathering logs for kube-controller-manager [20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312] ...
	I1018 09:11:54.476827  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20ce98cee7a65c567d8e5cab3d913b60c862afd25295d419c93e4cdab6c05312"
	I1018 09:11:54.504731  189686 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:11:54.504759  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:11:54.558946  189686 logs.go:123] Gathering logs for container status ...
	I1018 09:11:54.558983  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:11:54.601748  189686 logs.go:123] Gathering logs for kubelet ...
	I1018 09:11:54.601777  189686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
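Throughout this log-gathering pass the apiserver at 192.168.85.2:8443 is refusing connections, so the kubectl-based "describe nodes" step fails with the localhost:8443 error shown above and minikube falls back to crictl and journalctl collection. The error message implies the on-node kubeconfig points at the local apiserver endpoint:

    # /var/lib/minikube/kubeconfig (server entry implied by the error above)
    server: https://localhost:8443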
	
	
	==> CRI-O <==
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.55821105Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.559041721Z" level=info msg="Conmon does support the --sync option"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.559059641Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.559076277Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.55986892Z" level=info msg="Conmon does support the --sync option"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.559886212Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.563690164Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.563722403Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.564310891Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
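For cross-reference with the failed "sudo runc list" earlier in this report, the relevant settings in the configuration dump above are:

    default_runtime = "crun"   # containers run under crun, state in /run/crun
    # the runc runtime is configured with runtime_root = "/run/runc" but is
    # never used, hence "open /run/runc: no such file or directory"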
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.564709336Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.564779386Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.570578614Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.610937783Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-s4g4q Namespace:kube-system ID:9ff6d7c752170d3d3c82b2a67b922755e3af1a0d48eeb2dbb014ba305c5a84bb UID:2a6a8dd2-6620-4f70-8754-7205c4c93f06 NetNS:/var/run/netns/32668cb7-e2a1-47f6-aaed-cffc9df42a2e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001280f8}] Aliases:map[]}"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.61111723Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-s4g4q for CNI network kindnet (type=ptp)"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611574313Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611604121Z" level=info msg="Starting seccomp notifier watcher"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611647108Z" level=info msg="Create NRI interface"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611720926Z" level=info msg="built-in NRI default validator is disabled"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611726898Z" level=info msg="runtime interface created"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611737143Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611742903Z" level=info msg="runtime interface starting up..."
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611748323Z" level=info msg="starting plugins..."
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.611758564Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 18 09:11:50 pause-182020 crio[2219]: time="2025-10-18T09:11:50.612050519Z" level=info msg="No systemd watchdog enabled"
	Oct 18 09:11:50 pause-182020 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	f5ca822f709a1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago       Running             coredns                   0                   9ff6d7c752170       coredns-66bc5c9577-s4g4q               kube-system
	d7a65a8f4fe93       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   53 seconds ago       Running             kube-proxy                0                   8ef1dc84e06f9       kube-proxy-zlxhp                       kube-system
	eaf98db334b90       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   53 seconds ago       Running             kindnet-cni               0                   879525b712465       kindnet-kbtnf                          kube-system
	8807016bdc6e7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Running             kube-controller-manager   0                   005bb2ee16637       kube-controller-manager-pause-182020   kube-system
	04e5f45aeb885       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Running             kube-scheduler            0                   8c8cc334cacc0       kube-scheduler-pause-182020            kube-system
	12a42de0cfa9b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Running             kube-apiserver            0                   777b8eebef534       kube-apiserver-pause-182020            kube-system
	d92c7b180dca0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      0                   07d56c9ba7ddd       etcd-pause-182020                      kube-system
	
	
	==> coredns [f5ca822f709a1b9798c695190a4248e26ef175c5318934a88c07b7b1904bff04] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42798 - 49055 "HINFO IN 7574793521949092101.1276386647293361466. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091090041s
	
	
	==> describe nodes <==
	Name:               pause-182020
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-182020
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=pause-182020
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_10_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:10:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-182020
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:11:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:11:45 +0000   Sat, 18 Oct 2025 09:10:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:11:45 +0000   Sat, 18 Oct 2025 09:10:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:11:45 +0000   Sat, 18 Oct 2025 09:10:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:11:45 +0000   Sat, 18 Oct 2025 09:11:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-182020
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                d958dd9e-0008-4313-9519-339af1cb971b
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-s4g4q                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     54s
	  kube-system                 etcd-pause-182020                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         60s
	  kube-system                 kindnet-kbtnf                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-pause-182020             250m (3%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-pause-182020    200m (2%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-zlxhp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-pause-182020             100m (1%)     0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 53s   kube-proxy       
	  Normal  Starting                 60s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s   kubelet          Node pause-182020 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s   kubelet          Node pause-182020 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s   kubelet          Node pause-182020 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s   node-controller  Node pause-182020 event: Registered Node pause-182020 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-182020 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.101295] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028366] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.196963] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.012248] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.023849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +1.024040] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +2.047589] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +4.031586] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[  +8.255150] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[ +16.382250] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[Oct18 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	
	
	==> etcd [d92c7b180dca0476851d7f6127a563b17c31e334cb49a9c36a31c2355cb3eeae] <==
	{"level":"warn","ts":"2025-10-18T09:10:55.443861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:10:55.494118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:11:04.602368Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.268593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-10-18T09:11:04.602474Z","caller":"traceutil/trace.go:172","msg":"trace[1960749243] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:350; }","duration":"146.438968ms","start":"2025-10-18T09:11:04.456017Z","end":"2025-10-18T09:11:04.602456Z","steps":["trace[1960749243] 'range keys from in-memory index tree'  (duration: 146.11788ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:11:04.824107Z","caller":"traceutil/trace.go:172","msg":"trace[871428486] transaction","detail":"{read_only:false; response_revision:353; number_of_response:1; }","duration":"150.696184ms","start":"2025-10-18T09:11:04.673393Z","end":"2025-10-18T09:11:04.824089Z","steps":["trace[871428486] 'process raft request'  (duration: 150.590199ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:11:04.969504Z","caller":"traceutil/trace.go:172","msg":"trace[862003141] transaction","detail":"{read_only:false; response_revision:356; number_of_response:1; }","duration":"124.652736ms","start":"2025-10-18T09:11:04.844831Z","end":"2025-10-18T09:11:04.969483Z","steps":["trace[862003141] 'process raft request'  (duration: 124.462649ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:11:05.237930Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.454721ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789411450250622 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-qfkjs\" mod_revision:344 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-qfkjs\" value_size:4295 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-qfkjs\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T09:11:05.238104Z","caller":"traceutil/trace.go:172","msg":"trace[1918107826] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"209.889898ms","start":"2025-10-18T09:11:05.028202Z","end":"2025-10-18T09:11:05.238092Z","steps":["trace[1918107826] 'process raft request'  (duration: 209.822124ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:11:05.238121Z","caller":"traceutil/trace.go:172","msg":"trace[498431804] transaction","detail":"{read_only:false; response_revision:360; number_of_response:1; }","duration":"211.489175ms","start":"2025-10-18T09:11:05.026611Z","end":"2025-10-18T09:11:05.238100Z","steps":["trace[498431804] 'process raft request'  (duration: 80.379904ms)","trace[498431804] 'compare'  (duration: 130.29806ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:11:05.404536Z","caller":"traceutil/trace.go:172","msg":"trace[671931606] transaction","detail":"{read_only:false; number_of_response:1; response_revision:362; }","duration":"165.129764ms","start":"2025-10-18T09:11:05.239389Z","end":"2025-10-18T09:11:05.404519Z","steps":["trace[671931606] 'process raft request'  (duration: 127.044853ms)","trace[671931606] 'compare'  (duration: 38.002835ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:11:05.408703Z","caller":"traceutil/trace.go:172","msg":"trace[1161089971] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"164.711156ms","start":"2025-10-18T09:11:05.243970Z","end":"2025-10-18T09:11:05.408681Z","steps":["trace[1161089971] 'process raft request'  (duration: 164.60829ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:11:05.674579Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.145155ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789411450250627 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577.186f8addff9780d1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577.186f8addff9780d1\" value_size:622 lease:4650417374595474502 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T09:11:05.674879Z","caller":"traceutil/trace.go:172","msg":"trace[1382970621] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"267.378682ms","start":"2025-10-18T09:11:05.407479Z","end":"2025-10-18T09:11:05.674858Z","steps":["trace[1382970621] 'process raft request'  (duration: 125.894638ms)","trace[1382970621] 'compare'  (duration: 141.028875ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:11:05.674939Z","caller":"traceutil/trace.go:172","msg":"trace[323921766] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"265.344732ms","start":"2025-10-18T09:11:05.409582Z","end":"2025-10-18T09:11:05.674927Z","steps":["trace[323921766] 'process raft request'  (duration: 265.261095ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:11:05.888901Z","caller":"traceutil/trace.go:172","msg":"trace[1989917116] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"199.958561ms","start":"2025-10-18T09:11:05.688921Z","end":"2025-10-18T09:11:05.888880Z","steps":["trace[1989917116] 'process raft request'  (duration: 199.899084ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:11:05.889013Z","caller":"traceutil/trace.go:172","msg":"trace[1079851862] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"204.559158ms","start":"2025-10-18T09:11:05.684440Z","end":"2025-10-18T09:11:05.889000Z","steps":["trace[1079851862] 'process raft request'  (duration: 198.55483ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:11:06.258017Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"327.428878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" limit:1 ","response":"range_response_count:1 size:4101"}
	{"level":"info","ts":"2025-10-18T09:11:06.258172Z","caller":"traceutil/trace.go:172","msg":"trace[1055821814] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-66bc5c9577; range_end:; response_count:1; response_revision:368; }","duration":"327.589826ms","start":"2025-10-18T09:11:05.930562Z","end":"2025-10-18T09:11:06.258152Z","steps":["trace[1055821814] 'agreement among raft nodes before linearized reading'  (duration: 89.808008ms)","trace[1055821814] 'range keys from in-memory index tree'  (duration: 237.574143ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:11:06.258238Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T09:11:05.930547Z","time spent":"327.658605ms","remote":"127.0.0.1:38916","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":4124,"request content":"key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" limit:1 "}
	{"level":"warn","ts":"2025-10-18T09:11:06.258267Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"237.795404ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789411450250639 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:363 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4336 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T09:11:06.258492Z","caller":"traceutil/trace.go:172","msg":"trace[1721373907] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"359.274354ms","start":"2025-10-18T09:11:05.899207Z","end":"2025-10-18T09:11:06.258482Z","steps":["trace[1721373907] 'process raft request'  (duration: 359.12572ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:11:06.258551Z","caller":"traceutil/trace.go:172","msg":"trace[1572380880] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"362.211074ms","start":"2025-10-18T09:11:05.896322Z","end":"2025-10-18T09:11:06.258533Z","steps":["trace[1572380880] 'process raft request'  (duration: 124.071174ms)","trace[1572380880] 'compare'  (duration: 237.563654ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:11:06.258632Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T09:11:05.896303Z","time spent":"362.286633ms","remote":"127.0.0.1:38868","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4385,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:363 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4336 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2025-10-18T09:11:06.258566Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T09:11:05.899188Z","time spent":"359.338027ms","remote":"127.0.0.1:38290","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5389,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kindnet-kbtnf\" mod_revision:335 > success:<request_put:<key:\"/registry/pods/kube-system/kindnet-kbtnf\" value_size:5341 >> failure:<request_range:<key:\"/registry/pods/kube-system/kindnet-kbtnf\" > >"}
	{"level":"info","ts":"2025-10-18T09:11:06.556547Z","caller":"traceutil/trace.go:172","msg":"trace[952638412] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"123.04278ms","start":"2025-10-18T09:11:06.433487Z","end":"2025-10-18T09:11:06.556529Z","steps":["trace[952638412] 'process raft request'  (duration: 120.423906ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:11:58 up 54 min,  0 user,  load average: 2.47, 2.58, 1.75
	Linux pause-182020 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eaf98db334b9006e58ac3d2f70ad88ace993203f90b34acea96e2f6ddfbeaec8] <==
	I1018 09:11:04.753413       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:11:04.753711       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 09:11:04.753859       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:11:04.753876       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:11:04.753896       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:11:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:11:04.953548       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:11:04.953570       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:11:04.953582       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:11:04.953689       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:11:34.953864       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:11:34.953974       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 09:11:34.954204       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 09:11:34.954324       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1018 09:11:36.553727       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:11:36.553761       1 metrics.go:72] Registering metrics
	I1018 09:11:36.553826       1 controller.go:711] "Syncing nftables rules"
	I1018 09:11:44.959477       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 09:11:44.959532       1 main.go:301] handling current node
	I1018 09:11:54.961458       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 09:11:54.961494       1 main.go:301] handling current node
	
	
	==> kube-apiserver [12a42de0cfa9baf1a98897014767f47b63a1c6ae213ffaf923be29be984231c3] <==
	I1018 09:10:56.025587       1 policy_source.go:240] refreshing policies
	E1018 09:10:56.064805       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1018 09:10:56.110958       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:10:56.114586       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:10:56.114855       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 09:10:56.120832       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:10:56.120909       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:10:56.228155       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:10:56.914593       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:10:56.919307       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:10:56.919328       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:10:57.562984       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:10:57.607150       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:10:57.718086       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:10:57.724098       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1018 09:10:57.725215       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:10:57.729710       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:10:57.940797       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:10:58.734854       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:10:58.746210       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:10:58.756623       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:11:03.693224       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:11:03.698185       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:11:03.792197       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:11:03.993550       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8807016bdc6e7d79c7d0284d142aa731ef1ac4dc315dba67d96ef49d194d63d2] <==
	I1018 09:11:02.928392       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:11:02.937869       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:11:02.939076       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:11:02.939113       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:11:02.939182       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:11:02.939239       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:11:02.939272       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 09:11:02.939315       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:11:02.939315       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:11:02.939464       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:11:02.939468       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:11:02.939573       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:11:02.939582       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:11:02.939643       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:11:02.939798       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:11:02.940040       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:11:02.940077       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:11:02.940397       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:11:02.940417       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:11:02.944650       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:11:02.950684       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:11:02.951965       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:11:02.960035       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:11:02.971520       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:11:47.895082       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d7a65a8f4fe938ccc6475ccee4fbc8b3e900a2ff7eddc8b7274ea535e3bfcea8] <==
	I1018 09:11:04.641540       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:11:04.716388       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:11:04.817414       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:11:04.817453       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1018 09:11:04.817537       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:11:04.839578       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:11:04.839644       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:11:04.846234       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:11:04.846637       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:11:04.846667       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:11:04.848057       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:11:04.848085       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:11:04.848111       1 config.go:200] "Starting service config controller"
	I1018 09:11:04.848116       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:11:04.848133       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:11:04.848138       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:11:04.848178       1 config.go:309] "Starting node config controller"
	I1018 09:11:04.848186       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:11:04.848192       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:11:04.948962       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:11:04.949017       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:11:04.949026       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [04e5f45aeb8859409b77f6e8ae85fde70680a8b11a8d7d93b61866eee5c7d370] <==
	E1018 09:10:55.985146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:10:55.985322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:10:55.988651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:10:55.989920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:10:55.989939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:10:55.990039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:10:55.990233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:10:55.989939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:10:55.990361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:10:55.990412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:10:55.990460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:10:56.830295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:10:56.866281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:10:56.878386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:10:56.957763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:10:56.966216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 09:10:57.033386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:10:57.089460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:10:57.100738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:10:57.106289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:10:57.154684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:10:57.247627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:10:57.265488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:10:57.334056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1018 09:10:59.776396       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:10:59 pause-182020 kubelet[1325]: E1018 09:10:59.601955    1325 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-182020\" already exists" pod="kube-system/kube-controller-manager-pause-182020"
	Oct 18 09:10:59 pause-182020 kubelet[1325]: I1018 09:10:59.619863    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-182020" podStartSLOduration=1.619788832 podStartE2EDuration="1.619788832s" podCreationTimestamp="2025-10-18 09:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:10:59.619440274 +0000 UTC m=+1.138004731" watchObservedRunningTime="2025-10-18 09:10:59.619788832 +0000 UTC m=+1.138353287"
	Oct 18 09:10:59 pause-182020 kubelet[1325]: I1018 09:10:59.643991    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-182020" podStartSLOduration=1.6439658769999999 podStartE2EDuration="1.643965877s" podCreationTimestamp="2025-10-18 09:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:10:59.631609422 +0000 UTC m=+1.150173889" watchObservedRunningTime="2025-10-18 09:10:59.643965877 +0000 UTC m=+1.162530336"
	Oct 18 09:10:59 pause-182020 kubelet[1325]: I1018 09:10:59.659665    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-182020" podStartSLOduration=2.659645183 podStartE2EDuration="2.659645183s" podCreationTimestamp="2025-10-18 09:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:10:59.644558749 +0000 UTC m=+1.163123206" watchObservedRunningTime="2025-10-18 09:10:59.659645183 +0000 UTC m=+1.178209640"
	Oct 18 09:10:59 pause-182020 kubelet[1325]: I1018 09:10:59.674092    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-182020" podStartSLOduration=1.674067733 podStartE2EDuration="1.674067733s" podCreationTimestamp="2025-10-18 09:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:10:59.659943008 +0000 UTC m=+1.178507456" watchObservedRunningTime="2025-10-18 09:10:59.674067733 +0000 UTC m=+1.192632190"
	Oct 18 09:11:02 pause-182020 kubelet[1325]: I1018 09:11:02.907041    1325 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:11:02 pause-182020 kubelet[1325]: I1018 09:11:02.907879    1325 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.090948    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/89b40206-c23b-47d5-9f2c-e653f39823f8-cni-cfg\") pod \"kindnet-kbtnf\" (UID: \"89b40206-c23b-47d5-9f2c-e653f39823f8\") " pod="kube-system/kindnet-kbtnf"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.091017    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89b40206-c23b-47d5-9f2c-e653f39823f8-lib-modules\") pod \"kindnet-kbtnf\" (UID: \"89b40206-c23b-47d5-9f2c-e653f39823f8\") " pod="kube-system/kindnet-kbtnf"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.091054    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bfck\" (UniqueName: \"kubernetes.io/projected/3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc-kube-api-access-4bfck\") pod \"kube-proxy-zlxhp\" (UID: \"3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc\") " pod="kube-system/kube-proxy-zlxhp"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.091079    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89b40206-c23b-47d5-9f2c-e653f39823f8-xtables-lock\") pod \"kindnet-kbtnf\" (UID: \"89b40206-c23b-47d5-9f2c-e653f39823f8\") " pod="kube-system/kindnet-kbtnf"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.091102    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-876l9\" (UniqueName: \"kubernetes.io/projected/89b40206-c23b-47d5-9f2c-e653f39823f8-kube-api-access-876l9\") pod \"kindnet-kbtnf\" (UID: \"89b40206-c23b-47d5-9f2c-e653f39823f8\") " pod="kube-system/kindnet-kbtnf"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.091413    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc-xtables-lock\") pod \"kube-proxy-zlxhp\" (UID: \"3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc\") " pod="kube-system/kube-proxy-zlxhp"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.091489    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc-kube-proxy\") pod \"kube-proxy-zlxhp\" (UID: \"3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc\") " pod="kube-system/kube-proxy-zlxhp"
	Oct 18 09:11:04 pause-182020 kubelet[1325]: I1018 09:11:04.091527    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc-lib-modules\") pod \"kube-proxy-zlxhp\" (UID: \"3cc0ae2c-2ad7-4ec3-9f13-9b19c44124bc\") " pod="kube-system/kube-proxy-zlxhp"
	Oct 18 09:11:05 pause-182020 kubelet[1325]: I1018 09:11:05.891056    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zlxhp" podStartSLOduration=1.891036215 podStartE2EDuration="1.891036215s" podCreationTimestamp="2025-10-18 09:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:11:05.890415732 +0000 UTC m=+7.408980190" watchObservedRunningTime="2025-10-18 09:11:05.891036215 +0000 UTC m=+7.409600673"
	Oct 18 09:11:06 pause-182020 kubelet[1325]: I1018 09:11:06.260695    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kbtnf" podStartSLOduration=2.26067086 podStartE2EDuration="2.26067086s" podCreationTimestamp="2025-10-18 09:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:11:06.260195569 +0000 UTC m=+7.778760030" watchObservedRunningTime="2025-10-18 09:11:06.26067086 +0000 UTC m=+7.779235318"
	Oct 18 09:11:45 pause-182020 kubelet[1325]: I1018 09:11:45.234354    1325 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 09:11:45 pause-182020 kubelet[1325]: I1018 09:11:45.288698    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a6a8dd2-6620-4f70-8754-7205c4c93f06-config-volume\") pod \"coredns-66bc5c9577-s4g4q\" (UID: \"2a6a8dd2-6620-4f70-8754-7205c4c93f06\") " pod="kube-system/coredns-66bc5c9577-s4g4q"
	Oct 18 09:11:45 pause-182020 kubelet[1325]: I1018 09:11:45.288765    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9lql\" (UniqueName: \"kubernetes.io/projected/2a6a8dd2-6620-4f70-8754-7205c4c93f06-kube-api-access-c9lql\") pod \"coredns-66bc5c9577-s4g4q\" (UID: \"2a6a8dd2-6620-4f70-8754-7205c4c93f06\") " pod="kube-system/coredns-66bc5c9577-s4g4q"
	Oct 18 09:11:45 pause-182020 kubelet[1325]: I1018 09:11:45.715501    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s4g4q" podStartSLOduration=41.7154813 podStartE2EDuration="41.7154813s" podCreationTimestamp="2025-10-18 09:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:11:45.715213245 +0000 UTC m=+47.233777701" watchObservedRunningTime="2025-10-18 09:11:45.7154813 +0000 UTC m=+47.234045756"
	Oct 18 09:11:54 pause-182020 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:11:54 pause-182020 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:11:54 pause-182020 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:11:54 pause-182020 systemd[1]: kubelet.service: Consumed 2.329s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-182020 -n pause-182020
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-182020 -n pause-182020: exit status 2 (349.87254ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-182020 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.32s)
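Note on the failure above: the captured kubelet log shows systemd stopping kubelet.service at 09:11:54 (the pause attempt), yet the follow-up status probe at helpers_test.go:262 still reports the apiserver as Running and exits with status 2. For local triage, a minimal Go sketch (a hypothetical standalone helper, not part of the test suite; binary path, profile, and flags are copied verbatim from the failing run above) that re-runs the same probe:

// repro_status.go: re-runs the status probe from helpers_test.go:262 above.
// Hypothetical helper; all arguments are copied from the failing run.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "pause-182020", "-n", "pause-182020",
	)
	out, err := cmd.CombinedOutput()
	// The failing run printed "Running" and exited with status 2, i.e. the
	// apiserver was still up after the pause attempt.
	fmt.Printf("apiserver: %s(err: %v)\n", out, err)
}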

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-951975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-951975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (242.644007ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:15:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-951975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
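Note: MK_ADDON_ENABLE_PAUSED comes from minikube's pre-flight paused-state check, which (per the stderr above) shells out to `sudo runc list -f json`. On this node that command fails with `open /run/runc: no such file or directory`, so the check errors out before the addon is evaluated at all; presumably runc state lives elsewhere on this CRI-O node, though the log only shows the open failure. A minimal sketch of the same probe (a hypothetical helper for local triage, not minikube code):

// paused_check.go: re-runs the paused-container probe named in the
// MK_ADDON_ENABLE_PAUSED stderr above ("sudo runc list -f json").
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// On this node: `open /run/runc: no such file or directory`, so the
		// check fails before any addon work happens.
		fmt.Printf("probe failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}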
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-951975 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-951975 describe deploy/metrics-server -n kube-system: exit status 1 (65.605624ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-951975 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
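Note: the NotFound and the empty deployment info are downstream of the exit-status-11 enable above; metrics-server was never deployed, so nothing carries the expected fake.domain/registry.k8s.io/echoserver:1.4 image. A hypothetical re-run of the image assertion (context, namespace, and expected image copied from the log; it can only pass once the enable step succeeds):

// image_check.go: re-checks the assertion from start_stop_delete_test.go:219.
// Hypothetical helper, not part of the test suite.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-951975",
		"get", "deploy", "metrics-server", "-n", "kube-system",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}").CombinedOutput()
	if err != nil {
		// Matches the NotFound above: the deployment was never created.
		fmt.Printf("deployment missing: %v\n%s", err, out)
		return
	}
	fmt.Println(strings.Contains(string(out), "fake.domain/registry.k8s.io/echoserver:1.4"))
}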
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-951975
helpers_test.go:243: (dbg) docker inspect old-k8s-version-951975:

-- stdout --
	[
	    {
	        "Id": "d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866",
	        "Created": "2025-10-18T09:14:48.164862927Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280626,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:14:48.204261717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866/hosts",
	        "LogPath": "/var/lib/docker/containers/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866-json.log",
	        "Name": "/old-k8s-version-951975",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-951975:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-951975",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866",
	                "LowerDir": "/var/lib/docker/overlay2/e7302d3490bbf936dd2f2d552ba2ba9dcf7d4bb0152646a3d6445c572600b324-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e7302d3490bbf936dd2f2d552ba2ba9dcf7d4bb0152646a3d6445c572600b324/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e7302d3490bbf936dd2f2d552ba2ba9dcf7d4bb0152646a3d6445c572600b324/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e7302d3490bbf936dd2f2d552ba2ba9dcf7d4bb0152646a3d6445c572600b324/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-951975",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-951975/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-951975",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-951975",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-951975",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "51e2b0198c171804cf63057014b15fa675b22e6f368a7c026318efa12a3809bf",
	            "SandboxKey": "/var/run/docker/netns/51e2b0198c17",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-951975": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:9a:3f:2b:e2:ea",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "24bc48639b258a05e4ef01c1cdad81fb398d660a6740ed3b45a916093c5c2afe",
	                    "EndpointID": "5cfe9b10758f27ef7476f7f33ed3fa504697faf4c46c814cfb67d8a0789bbad6",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-951975",
	                        "d0100f52d126"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
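
For post-mortem purposes, the most useful part of the inspect dump is the published-port table: 8443/tcp (the apiserver) is bound to 127.0.0.1:33091. A hedged sketch of extracting that mapping programmatically with docker inspect and Go's encoding/json (illustrative only, not part of the harness):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container models just enough of the inspect schema to reach
// NetworkSettings.Ports.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-951975").Output()
	if err != nil {
		panic(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	// With the dump above this prints 127.0.0.1:33091.
	for _, b := range cs[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
	}
}
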
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-951975 -n old-k8s-version-951975
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-951975 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-951975 logs -n 25: (1.160074617s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-448954 sudo systemctl cat kubelet --no-pager                                                                                                │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                 │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo cat /etc/kubernetes/kubelet.conf                                                                                                │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo cat /var/lib/kubelet/config.yaml                                                                                                │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo systemctl status docker --all --full --no-pager                                                                                 │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	│ ssh     │ -p flannel-448954 sudo systemctl cat docker --no-pager                                                                                                 │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo cat /etc/docker/daemon.json                                                                                                     │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	│ ssh     │ -p flannel-448954 sudo docker system info                                                                                                              │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	│ ssh     │ -p flannel-448954 sudo systemctl status cri-docker --all --full --no-pager                                                                             │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	│ ssh     │ -p flannel-448954 sudo systemctl cat cri-docker --no-pager                                                                                             │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                        │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	│ ssh     │ -p flannel-448954 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                  │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo cri-dockerd --version                                                                                                           │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo systemctl status containerd --all --full --no-pager                                                                             │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	│ ssh     │ -p flannel-448954 sudo systemctl cat containerd --no-pager                                                                                             │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo cat /lib/systemd/system/containerd.service                                                                                      │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo cat /etc/containerd/config.toml                                                                                                 │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo containerd config dump                                                                                                          │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo systemctl status crio --all --full --no-pager                                                                                   │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo systemctl cat crio --no-pager                                                                                                   │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                         │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo crio config                                                                                                                     │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ delete  │ -p flannel-448954                                                                                                                                      │ flannel-448954         │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ start   │ -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ embed-certs-880603     │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-951975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain           │ old-k8s-version-951975 │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:15:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:15:32.008965  295389 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:15:32.009235  295389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:15:32.009245  295389 out.go:374] Setting ErrFile to fd 2...
	I1018 09:15:32.009253  295389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:15:32.009574  295389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:15:32.010217  295389 out.go:368] Setting JSON to false
	I1018 09:15:32.011540  295389 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3480,"bootTime":1760775452,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:15:32.011651  295389 start.go:141] virtualization: kvm guest
	I1018 09:15:32.014179  295389 out.go:179] * [embed-certs-880603] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:15:32.015497  295389 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:15:32.015499  295389 notify.go:220] Checking for updates...
	I1018 09:15:32.017790  295389 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:15:32.019169  295389 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:15:32.020430  295389 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:15:32.021658  295389 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:15:32.022996  295389 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:15:32.027124  295389 config.go:182] Loaded profile config "enable-default-cni-448954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:15:32.027293  295389 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:15:32.027411  295389 config.go:182] Loaded profile config "old-k8s-version-951975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:15:32.027526  295389 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:15:32.054525  295389 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:15:32.054611  295389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:15:32.128651  295389 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:83 SystemTime:2025-10-18 09:15:32.117156244 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:15:32.128759  295389 docker.go:318] overlay module found
	I1018 09:15:32.130483  295389 out.go:179] * Using the docker driver based on user configuration
	I1018 09:15:32.131878  295389 start.go:305] selected driver: docker
	I1018 09:15:32.131900  295389 start.go:925] validating driver "docker" against <nil>
	I1018 09:15:32.131911  295389 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:15:32.132501  295389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:15:32.197483  295389 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:84 SystemTime:2025-10-18 09:15:32.186550447 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:15:32.197677  295389 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:15:32.197911  295389 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:15:32.199614  295389 out.go:179] * Using Docker driver with root privileges
	I1018 09:15:32.201005  295389 cni.go:84] Creating CNI manager for ""
	I1018 09:15:32.201070  295389 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:15:32.201080  295389 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:15:32.201153  295389 start.go:349] cluster config:
	{Name:embed-certs-880603 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:15:32.202491  295389 out.go:179] * Starting "embed-certs-880603" primary control-plane node in "embed-certs-880603" cluster
	I1018 09:15:32.203770  295389 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:15:32.204912  295389 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:15:32.205922  295389 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:15:32.205956  295389 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:15:32.205963  295389 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:15:32.206050  295389 cache.go:58] Caching tarball of preloaded images
	I1018 09:15:32.206154  295389 preload.go:233] Found /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:15:32.206169  295389 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:15:32.206266  295389 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/config.json ...
	I1018 09:15:32.206285  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/config.json: {Name:mk23740dd9b3ceb9853336235bb2b2f334ebec71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:32.230777  295389 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:15:32.230801  295389 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:15:32.230821  295389 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:15:32.230852  295389 start.go:360] acquireMachinesLock for embed-certs-880603: {Name:mkdfbdbf4ee52d14237c1c3c1038142062936208 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:15:32.230972  295389 start.go:364] duration metric: took 100.257µs to acquireMachinesLock for "embed-certs-880603"
	I1018 09:15:32.231005  295389 start.go:93] Provisioning new machine with config: &{Name:embed-certs-880603 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:15:32.231091  295389 start.go:125] createHost starting for "" (driver="docker")
	W1018 09:15:28.845071  278283 node_ready.go:57] node "old-k8s-version-951975" has "Ready":"False" status (will retry)
	W1018 09:15:31.344202  278283 node_ready.go:57] node "old-k8s-version-951975" has "Ready":"False" status (will retry)
	I1018 09:15:31.845510  278283 node_ready.go:49] node "old-k8s-version-951975" is "Ready"
	I1018 09:15:31.845542  278283 node_ready.go:38] duration metric: took 14.004731455s for node "old-k8s-version-951975" to be "Ready" ...
	I1018 09:15:31.845559  278283 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:15:31.845616  278283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:15:31.861586  278283 api_server.go:72] duration metric: took 14.578860164s to wait for apiserver process to appear ...
	I1018 09:15:31.861614  278283 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:15:31.861638  278283 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:15:31.867174  278283 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 09:15:31.868791  278283 api_server.go:141] control plane version: v1.28.0
	I1018 09:15:31.868821  278283 api_server.go:131] duration metric: took 7.19826ms to wait for apiserver health ...
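
The healthz probe logged above is a plain HTTPS GET against https://192.168.103.2:8443/healthz expecting a 200/ok body (anonymous access to /healthz is granted by the default system:public-info-viewer binding). A minimal Go sketch, with certificate verification disabled purely for brevity, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// InsecureSkipVerify is a shortcut for this sketch only.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.103.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
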
	I1018 09:15:31.868831  278283 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:15:31.874120  278283 system_pods.go:59] 8 kube-system pods found
	I1018 09:15:31.874167  278283 system_pods.go:61] "coredns-5dd5756b68-gwttp" [349d3695-c749-4802-a9eb-53de5ac78c69] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:15:31.874178  278283 system_pods.go:61] "etcd-old-k8s-version-951975" [557b1dae-5b7e-411f-bd2d-47ed28a669e8] Running
	I1018 09:15:31.874187  278283 system_pods.go:61] "kindnet-k2756" [a85dd000-dd96-42a9-bca1-92345ab498da] Running
	I1018 09:15:31.874194  278283 system_pods.go:61] "kube-apiserver-old-k8s-version-951975" [6c6bba36-eef9-430d-9910-6872feda0163] Running
	I1018 09:15:31.874201  278283 system_pods.go:61] "kube-controller-manager-old-k8s-version-951975" [68e333df-a9a4-4bdb-85c5-765aa1551f0d] Running
	I1018 09:15:31.874206  278283 system_pods.go:61] "kube-proxy-rrzqp" [1dbe03c6-5db9-49c5-9016-c421b2d7c581] Running
	I1018 09:15:31.874212  278283 system_pods.go:61] "kube-scheduler-old-k8s-version-951975" [cf36a929-52ed-4029-86b9-610775599e13] Running
	I1018 09:15:31.874221  278283 system_pods.go:61] "storage-provisioner" [201e8ed1-c6b6-4ac6-ada3-291e6b900df8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:15:31.874228  278283 system_pods.go:74] duration metric: took 5.39076ms to wait for pod list to return data ...
	I1018 09:15:31.874245  278283 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:15:31.877787  278283 default_sa.go:45] found service account: "default"
	I1018 09:15:31.877814  278283 default_sa.go:55] duration metric: took 3.562051ms for default service account to be created ...
	I1018 09:15:31.877826  278283 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:15:31.885357  278283 system_pods.go:86] 8 kube-system pods found
	I1018 09:15:31.885399  278283 system_pods.go:89] "coredns-5dd5756b68-gwttp" [349d3695-c749-4802-a9eb-53de5ac78c69] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:15:31.885409  278283 system_pods.go:89] "etcd-old-k8s-version-951975" [557b1dae-5b7e-411f-bd2d-47ed28a669e8] Running
	I1018 09:15:31.885420  278283 system_pods.go:89] "kindnet-k2756" [a85dd000-dd96-42a9-bca1-92345ab498da] Running
	I1018 09:15:31.885426  278283 system_pods.go:89] "kube-apiserver-old-k8s-version-951975" [6c6bba36-eef9-430d-9910-6872feda0163] Running
	I1018 09:15:31.885435  278283 system_pods.go:89] "kube-controller-manager-old-k8s-version-951975" [68e333df-a9a4-4bdb-85c5-765aa1551f0d] Running
	I1018 09:15:31.885441  278283 system_pods.go:89] "kube-proxy-rrzqp" [1dbe03c6-5db9-49c5-9016-c421b2d7c581] Running
	I1018 09:15:31.885446  278283 system_pods.go:89] "kube-scheduler-old-k8s-version-951975" [cf36a929-52ed-4029-86b9-610775599e13] Running
	I1018 09:15:31.885461  278283 system_pods.go:89] "storage-provisioner" [201e8ed1-c6b6-4ac6-ada3-291e6b900df8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:15:31.885491  278283 retry.go:31] will retry after 283.843392ms: missing components: kube-dns
	I1018 09:15:32.175401  278283 system_pods.go:86] 8 kube-system pods found
	I1018 09:15:32.175443  278283 system_pods.go:89] "coredns-5dd5756b68-gwttp" [349d3695-c749-4802-a9eb-53de5ac78c69] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:15:32.175451  278283 system_pods.go:89] "etcd-old-k8s-version-951975" [557b1dae-5b7e-411f-bd2d-47ed28a669e8] Running
	I1018 09:15:32.175461  278283 system_pods.go:89] "kindnet-k2756" [a85dd000-dd96-42a9-bca1-92345ab498da] Running
	I1018 09:15:32.175469  278283 system_pods.go:89] "kube-apiserver-old-k8s-version-951975" [6c6bba36-eef9-430d-9910-6872feda0163] Running
	I1018 09:15:32.175476  278283 system_pods.go:89] "kube-controller-manager-old-k8s-version-951975" [68e333df-a9a4-4bdb-85c5-765aa1551f0d] Running
	I1018 09:15:32.175481  278283 system_pods.go:89] "kube-proxy-rrzqp" [1dbe03c6-5db9-49c5-9016-c421b2d7c581] Running
	I1018 09:15:32.175487  278283 system_pods.go:89] "kube-scheduler-old-k8s-version-951975" [cf36a929-52ed-4029-86b9-610775599e13] Running
	I1018 09:15:32.175501  278283 system_pods.go:89] "storage-provisioner" [201e8ed1-c6b6-4ac6-ada3-291e6b900df8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:15:32.175518  278283 retry.go:31] will retry after 293.269633ms: missing components: kube-dns
	I1018 09:15:32.474124  278283 system_pods.go:86] 8 kube-system pods found
	I1018 09:15:32.474167  278283 system_pods.go:89] "coredns-5dd5756b68-gwttp" [349d3695-c749-4802-a9eb-53de5ac78c69] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:15:32.474178  278283 system_pods.go:89] "etcd-old-k8s-version-951975" [557b1dae-5b7e-411f-bd2d-47ed28a669e8] Running
	I1018 09:15:32.474188  278283 system_pods.go:89] "kindnet-k2756" [a85dd000-dd96-42a9-bca1-92345ab498da] Running
	I1018 09:15:32.474194  278283 system_pods.go:89] "kube-apiserver-old-k8s-version-951975" [6c6bba36-eef9-430d-9910-6872feda0163] Running
	I1018 09:15:32.474201  278283 system_pods.go:89] "kube-controller-manager-old-k8s-version-951975" [68e333df-a9a4-4bdb-85c5-765aa1551f0d] Running
	I1018 09:15:32.474206  278283 system_pods.go:89] "kube-proxy-rrzqp" [1dbe03c6-5db9-49c5-9016-c421b2d7c581] Running
	I1018 09:15:32.474213  278283 system_pods.go:89] "kube-scheduler-old-k8s-version-951975" [cf36a929-52ed-4029-86b9-610775599e13] Running
	I1018 09:15:32.474228  278283 system_pods.go:89] "storage-provisioner" [201e8ed1-c6b6-4ac6-ada3-291e6b900df8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:15:32.474254  278283 retry.go:31] will retry after 328.832401ms: missing components: kube-dns
	I1018 09:15:32.808628  278283 system_pods.go:86] 8 kube-system pods found
	I1018 09:15:32.808674  278283 system_pods.go:89] "coredns-5dd5756b68-gwttp" [349d3695-c749-4802-a9eb-53de5ac78c69] Running
	I1018 09:15:32.808684  278283 system_pods.go:89] "etcd-old-k8s-version-951975" [557b1dae-5b7e-411f-bd2d-47ed28a669e8] Running
	I1018 09:15:32.808689  278283 system_pods.go:89] "kindnet-k2756" [a85dd000-dd96-42a9-bca1-92345ab498da] Running
	I1018 09:15:32.808695  278283 system_pods.go:89] "kube-apiserver-old-k8s-version-951975" [6c6bba36-eef9-430d-9910-6872feda0163] Running
	I1018 09:15:32.808711  278283 system_pods.go:89] "kube-controller-manager-old-k8s-version-951975" [68e333df-a9a4-4bdb-85c5-765aa1551f0d] Running
	I1018 09:15:32.808718  278283 system_pods.go:89] "kube-proxy-rrzqp" [1dbe03c6-5db9-49c5-9016-c421b2d7c581] Running
	I1018 09:15:32.808727  278283 system_pods.go:89] "kube-scheduler-old-k8s-version-951975" [cf36a929-52ed-4029-86b9-610775599e13] Running
	I1018 09:15:32.808733  278283 system_pods.go:89] "storage-provisioner" [201e8ed1-c6b6-4ac6-ada3-291e6b900df8] Running
	I1018 09:15:32.808745  278283 system_pods.go:126] duration metric: took 930.909907ms to wait for k8s-apps to be running ...
	I1018 09:15:32.808760  278283 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:15:32.808810  278283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:15:32.824848  278283 system_svc.go:56] duration metric: took 16.07781ms WaitForService to wait for kubelet
	I1018 09:15:32.824880  278283 kubeadm.go:586] duration metric: took 15.542161861s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:15:32.824902  278283 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:15:32.828117  278283 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:15:32.828143  278283 node_conditions.go:123] node cpu capacity is 8
	I1018 09:15:32.828159  278283 node_conditions.go:105] duration metric: took 3.252974ms to run NodePressure ...
	I1018 09:15:32.828172  278283 start.go:241] waiting for startup goroutines ...
	I1018 09:15:32.828181  278283 start.go:246] waiting for cluster config update ...
	I1018 09:15:32.828199  278283 start.go:255] writing updated cluster config ...
	I1018 09:15:32.828503  278283 ssh_runner.go:195] Run: rm -f paused
	I1018 09:15:32.833552  278283 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:15:32.838471  278283 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gwttp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:32.844815  278283 pod_ready.go:94] pod "coredns-5dd5756b68-gwttp" is "Ready"
	I1018 09:15:32.844841  278283 pod_ready.go:86] duration metric: took 6.340697ms for pod "coredns-5dd5756b68-gwttp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:32.848479  278283 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:32.854136  278283 pod_ready.go:94] pod "etcd-old-k8s-version-951975" is "Ready"
	I1018 09:15:32.854166  278283 pod_ready.go:86] duration metric: took 5.64737ms for pod "etcd-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:32.860230  278283 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:32.865472  278283 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-951975" is "Ready"
	I1018 09:15:32.865504  278283 pod_ready.go:86] duration metric: took 5.249944ms for pod "kube-apiserver-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:32.868862  278283 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:31.691516  285675 out.go:252]   - Configuring RBAC rules ...
	I1018 09:15:31.691688  285675 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:15:31.698230  285675 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:15:31.704460  285675 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:15:31.707490  285675 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:15:31.710509  285675 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:15:31.720041  285675 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:15:32.044031  285675 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:15:32.469144  285675 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:15:33.042364  285675 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:15:33.043616  285675 kubeadm.go:318] 
	I1018 09:15:33.043732  285675 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:15:33.043750  285675 kubeadm.go:318] 
	I1018 09:15:33.043884  285675 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:15:33.043902  285675 kubeadm.go:318] 
	I1018 09:15:33.043937  285675 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:15:33.044031  285675 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:15:33.044113  285675 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:15:33.044128  285675 kubeadm.go:318] 
	I1018 09:15:33.044203  285675 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:15:33.044212  285675 kubeadm.go:318] 
	I1018 09:15:33.044279  285675 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:15:33.044287  285675 kubeadm.go:318] 
	I1018 09:15:33.044391  285675 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:15:33.044474  285675 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:15:33.044573  285675 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:15:33.044590  285675 kubeadm.go:318] 
	I1018 09:15:33.044726  285675 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:15:33.044842  285675 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:15:33.044858  285675 kubeadm.go:318] 
	I1018 09:15:33.044996  285675 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token dp9bve.11qtsrmx8h95i336 \
	I1018 09:15:33.045154  285675 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f \
	I1018 09:15:33.045213  285675 kubeadm.go:318] 	--control-plane 
	I1018 09:15:33.045224  285675 kubeadm.go:318] 
	I1018 09:15:33.045381  285675 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:15:33.045398  285675 kubeadm.go:318] 
	I1018 09:15:33.045516  285675 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token dp9bve.11qtsrmx8h95i336 \
	I1018 09:15:33.045676  285675 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f 
	I1018 09:15:33.048195  285675 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:15:33.048334  285675 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
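
The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash, which per kubeadm's documented scheme is a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. A small Go sketch that recomputes it on the control-plane node (CA path is the standard kubeadm location, assumed here):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // standard kubeadm path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
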
	I1018 09:15:33.048384  285675 cni.go:84] Creating CNI manager for ""
	I1018 09:15:33.048394  285675 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:15:33.050368  285675 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 09:15:33.051650  285675 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:15:33.057270  285675 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:15:33.057292  285675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:15:33.074946  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:15:33.337199  285675 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:15:33.337280  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:33.337288  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-031066 minikube.k8s.io/updated_at=2025_10_18T09_15_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=no-preload-031066 minikube.k8s.io/primary=true
	I1018 09:15:33.434790  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:33.438470  285675 ops.go:34] apiserver oom_adj: -16
	I1018 09:15:33.238462  278283 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-951975" is "Ready"
	I1018 09:15:33.238489  278283 pod_ready.go:86] duration metric: took 369.604169ms for pod "kube-controller-manager-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:33.439364  278283 pod_ready.go:83] waiting for pod "kube-proxy-rrzqp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:33.837915  278283 pod_ready.go:94] pod "kube-proxy-rrzqp" is "Ready"
	I1018 09:15:33.837945  278283 pod_ready.go:86] duration metric: took 398.554496ms for pod "kube-proxy-rrzqp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:34.039441  278283 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:34.438779  278283 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-951975" is "Ready"
	I1018 09:15:34.438803  278283 pod_ready.go:86] duration metric: took 399.332707ms for pod "kube-scheduler-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:34.438814  278283 pod_ready.go:40] duration metric: took 1.605221382s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:15:34.490030  278283 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1018 09:15:34.493297  278283 out.go:203] 
	W1018 09:15:34.494858  278283 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 09:15:34.496973  278283 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 09:15:34.499306  278283 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-951975" cluster and "default" namespace by default
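
The "will retry after …" lines in the start-up log above come from minikube's retry helper (retry.go), which polls with a randomized backoff until the missing component (kube-dns here) turns up. A stand-alone Go sketch of that pattern, assuming nothing about the real implementation beyond the shape visible in the log:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil polls check with a jittered sleep until it succeeds or the
// deadline passes, mimicking the shape of the log output above.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: %w", err)
		}
		wait := time.Duration(200+rand.Intn(200)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
}

func main() {
	attempts := 0
	_ = retryUntil(5*time.Second, func() error {
		if attempts++; attempts < 3 {
			return fmt.Errorf("missing components: kube-dns")
		}
		return nil
	})
}
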
	W1018 09:15:31.354275  275240 pod_ready.go:104] pod "coredns-66bc5c9577-mvszb" is not "Ready", error: <nil>
	W1018 09:15:33.357639  275240 pod_ready.go:104] pod "coredns-66bc5c9577-mvszb" is not "Ready", error: <nil>
	I1018 09:15:32.233405  295389 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:15:32.233743  295389 start.go:159] libmachine.API.Create for "embed-certs-880603" (driver="docker")
	I1018 09:15:32.233784  295389 client.go:168] LocalClient.Create starting
	I1018 09:15:32.233874  295389 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem
	I1018 09:15:32.233921  295389 main.go:141] libmachine: Decoding PEM data...
	I1018 09:15:32.233942  295389 main.go:141] libmachine: Parsing certificate...
	I1018 09:15:32.234017  295389 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem
	I1018 09:15:32.234053  295389 main.go:141] libmachine: Decoding PEM data...
	I1018 09:15:32.234078  295389 main.go:141] libmachine: Parsing certificate...
	I1018 09:15:32.234555  295389 cli_runner.go:164] Run: docker network inspect embed-certs-880603 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:15:32.255972  295389 cli_runner.go:211] docker network inspect embed-certs-880603 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:15:32.256059  295389 network_create.go:284] running [docker network inspect embed-certs-880603] to gather additional debugging logs...
	I1018 09:15:32.256079  295389 cli_runner.go:164] Run: docker network inspect embed-certs-880603
	W1018 09:15:32.277611  295389 cli_runner.go:211] docker network inspect embed-certs-880603 returned with exit code 1
	I1018 09:15:32.277639  295389 network_create.go:287] error running [docker network inspect embed-certs-880603]: docker network inspect embed-certs-880603: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-880603 not found
	I1018 09:15:32.277651  295389 network_create.go:289] output of [docker network inspect embed-certs-880603]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-880603 not found
	
	** /stderr **
	I1018 09:15:32.277778  295389 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:15:32.301309  295389 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0a5d0734e8e5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:09:81:3f:ef:cf} reservation:<nil>}
	I1018 09:15:32.302278  295389 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0be1ffd412fe IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:00:46:36:7b:65} reservation:<nil>}
	I1018 09:15:32.303325  295389 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e93e49dbe6fd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:52:68:21:3c:ba:1e} reservation:<nil>}
	I1018 09:15:32.304628  295389 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec7ab0}
	I1018 09:15:32.304708  295389 network_create.go:124] attempt to create docker network embed-certs-880603 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 09:15:32.304779  295389 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-880603 embed-certs-880603
	I1018 09:15:32.381666  295389 network_create.go:108] docker network embed-certs-880603 192.168.76.0/24 created
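
The subnet probing above walks candidate private /24 ranges (192.168.49.0, then 58, 67, 76 — the third octet steps by 9) and skips any range an existing bridge interface already occupies. A rough sketch of that scan, assuming host interfaces are the only source of truth (minikube's real network.go also tracks its own reservations, as the `reservation:` field in the log shows):

package main

import (
	"fmt"
	"net"
)

// freeSubnet returns the first candidate 192.168.x.0/24 that no host
// interface (e.g. an existing docker bridge like br-0a5d0734e8e5 holding
// 192.168.49.1) already sits in.
func freeSubnet() (*net.IPNet, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for third := 49; third <= 247; third += 9 {
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		taken := false
		for _, ifc := range ifaces {
			addrs, _ := ifc.Addrs()
			for _, a := range addrs {
				if ipn, ok := a.(*net.IPNet); ok && candidate.Contains(ipn.IP) {
					taken = true
				}
			}
		}
		if !taken {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	s, err := freeSubnet()
	fmt.Println(s, err) // on this host: 192.168.76.0/24 <nil>
}
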
	I1018 09:15:32.381699  295389 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-880603" container
	I1018 09:15:32.381798  295389 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:15:32.401640  295389 cli_runner.go:164] Run: docker volume create embed-certs-880603 --label name.minikube.sigs.k8s.io=embed-certs-880603 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:15:32.422540  295389 oci.go:103] Successfully created a docker volume embed-certs-880603
	I1018 09:15:32.422618  295389 cli_runner.go:164] Run: docker run --rm --name embed-certs-880603-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-880603 --entrypoint /usr/bin/test -v embed-certs-880603:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:15:32.865542  295389 oci.go:107] Successfully prepared a docker volume embed-certs-880603
	I1018 09:15:32.865607  295389 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:15:32.865629  295389 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:15:32.865726  295389 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-880603:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
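
The extraction step above seeds the freshly created volume before the node container exists: a throwaway container mounts the lz4 preload tarball read-only, mounts the named volume, and untars the cached images into it. A hedged sketch of the same trick (the helper name seedVolume is made up):

package main

import "os/exec"

// seedVolume untars a preloaded image tarball into a named docker volume by
// running a disposable container whose entrypoint is tar, mirroring the
// "docker run --rm --entrypoint /usr/bin/tar ..." command in the log.
func seedVolume(volume, tarball, image string) error {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
}

func main() {
	_ = seedVolume("embed-certs-880603",
		"/home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757")
}
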
	I1018 09:15:33.935277  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:34.435510  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:34.935025  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:35.435706  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:35.935579  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:36.435528  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:36.934884  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:37.434897  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:37.542243  285675 kubeadm.go:1113] duration metric: took 4.205033258s to wait for elevateKubeSystemPrivileges
	I1018 09:15:37.542276  285675 kubeadm.go:402] duration metric: took 15.653050858s to StartCluster
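
The run of identical `kubectl get sa default` probes above is a plain retry-until-success loop: kubeadm has finished, and minikube polls roughly every 500ms until the default service account exists, then records the elapsed time as a duration metric. A minimal sketch of the pattern (the timeout value here is an assumption, not minikube's constant):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryUntil runs cmd every interval until it exits 0 or the deadline
// passes, mirroring the ~500ms-spaced "kubectl get sa default" probes.
func retryUntil(timeout, interval time.Duration, name string, args ...string) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := exec.Command(name, args...).Run(); err == nil {
			return nil // default service account exists; privileges are ready
		} else if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: %w", name, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := retryUntil(2*time.Minute, 500*time.Millisecond,
		"kubectl", "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
	fmt.Println(err)
}
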
	I1018 09:15:37.542299  285675 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:37.542400  285675 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:15:37.544117  285675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:37.544962  285675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:15:37.544984  285675 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:15:37.545265  285675 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:15:37.545317  285675 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:15:37.545424  285675 addons.go:69] Setting storage-provisioner=true in profile "no-preload-031066"
	I1018 09:15:37.545447  285675 addons.go:238] Setting addon storage-provisioner=true in "no-preload-031066"
	I1018 09:15:37.545479  285675 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:15:37.545492  285675 addons.go:69] Setting default-storageclass=true in profile "no-preload-031066"
	I1018 09:15:37.545508  285675 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-031066"
	I1018 09:15:37.545834  285675 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:15:37.545995  285675 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:15:37.547505  285675 out.go:179] * Verifying Kubernetes components...
	I1018 09:15:37.549112  285675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:15:37.581126  285675 addons.go:238] Setting addon default-storageclass=true in "no-preload-031066"
	I1018 09:15:37.581177  285675 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:15:37.581763  285675 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:15:37.585895  285675 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:15:37.587609  285675 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:15:37.587632  285675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:15:37.587694  285675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:15:37.633093  285675 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:15:37.633119  285675 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:15:37.633379  285675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:15:37.634927  285675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:15:37.668040  285675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:15:37.680762  285675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:15:37.741473  285675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:15:37.769011  285675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:15:37.794990  285675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:15:37.923955  285675 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 09:15:37.925528  285675 node_ready.go:35] waiting up to 6m0s for node "no-preload-031066" to be "Ready" ...
	I1018 09:15:38.173982  285675 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:15:38.175420  285675 addons.go:514] duration metric: took 630.09547ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:15:38.429469  285675 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-031066" context rescaled to 1 replicas
	W1018 09:15:35.854944  275240 pod_ready.go:104] pod "coredns-66bc5c9577-mvszb" is not "Ready", error: <nil>
	W1018 09:15:37.857842  275240 pod_ready.go:104] pod "coredns-66bc5c9577-mvszb" is not "Ready", error: <nil>
	W1018 09:15:40.354699  275240 pod_ready.go:104] pod "coredns-66bc5c9577-mvszb" is not "Ready", error: <nil>
	I1018 09:15:37.570463  295389 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-880603:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.704676283s)
	I1018 09:15:37.570508  295389 kic.go:203] duration metric: took 4.704875702s to extract preloaded images to volume ...
	W1018 09:15:37.570614  295389 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:15:37.570653  295389 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:15:37.570698  295389 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:15:37.691551  295389 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-880603 --name embed-certs-880603 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-880603 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-880603 --network embed-certs-880603 --ip 192.168.76.2 --volume embed-certs-880603:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:15:38.077977  295389 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Running}}
	I1018 09:15:38.102014  295389 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:15:38.125671  295389 cli_runner.go:164] Run: docker exec embed-certs-880603 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:15:38.177896  295389 oci.go:144] the created container "embed-certs-880603" has a running status.
	I1018 09:15:38.177930  295389 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa...
	I1018 09:15:38.642256  295389 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:15:38.676713  295389 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:15:38.702947  295389 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:15:38.702983  295389 kic_runner.go:114] Args: [docker exec --privileged embed-certs-880603 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:15:38.766592  295389 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:15:38.792120  295389 machine.go:93] provisionDockerMachine start ...
	I1018 09:15:38.792220  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:38.821161  295389 main.go:141] libmachine: Using SSH client type: native
	I1018 09:15:38.821544  295389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1018 09:15:38.821563  295389 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:15:38.978655  295389 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880603
	
	I1018 09:15:38.978756  295389 ubuntu.go:182] provisioning hostname "embed-certs-880603"
	I1018 09:15:38.978850  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:39.004942  295389 main.go:141] libmachine: Using SSH client type: native
	I1018 09:15:39.005320  295389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1018 09:15:39.005388  295389 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880603 && echo "embed-certs-880603" | sudo tee /etc/hostname
	I1018 09:15:39.176612  295389 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880603
	
	I1018 09:15:39.176723  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:39.200427  295389 main.go:141] libmachine: Using SSH client type: native
	I1018 09:15:39.200706  295389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1018 09:15:39.200737  295389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880603' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880603/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880603' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:15:39.351904  295389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
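
provisionDockerMachine drives all of the above over SSH to the container's published port (127.0.0.1:33098 here) with the generated id_rsa key. A minimal sketch of such a native SSH client using golang.org/x/crypto/ssh (illustrative, not minikube's implementation; InsecureIgnoreHostKey is tolerable only because the target is a local throwaway container):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the container's published SSH port and runs one command,
// returning its combined output — the same shape as the hostname and
// /etc/hosts edits in the log above.
func runOverSSH(addr, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local throwaway container only
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:33098", "docker",
		"/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa",
		"hostname")
	fmt.Println(out, err)
}
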
	I1018 09:15:39.351937  295389 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:15:39.351981  295389 ubuntu.go:190] setting up certificates
	I1018 09:15:39.352002  295389 provision.go:84] configureAuth start
	I1018 09:15:39.352069  295389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-880603
	I1018 09:15:39.378627  295389 provision.go:143] copyHostCerts
	I1018 09:15:39.378707  295389 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:15:39.378719  295389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:15:39.378790  295389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:15:39.378957  295389 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:15:39.378973  295389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:15:39.379016  295389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:15:39.379114  295389 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:15:39.379126  295389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:15:39.379166  295389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:15:39.379264  295389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880603 san=[127.0.0.1 192.168.76.2 embed-certs-880603 localhost minikube]
	I1018 09:15:39.569781  295389 provision.go:177] copyRemoteCerts
	I1018 09:15:39.569855  295389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:15:39.569903  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:39.594280  295389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:15:39.695654  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:15:39.722744  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 09:15:39.743245  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:15:39.764479  295389 provision.go:87] duration metric: took 412.459003ms to configureAuth
	I1018 09:15:39.764511  295389 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:15:39.764708  295389 config.go:182] Loaded profile config "embed-certs-880603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:15:39.764854  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:39.784307  295389 main.go:141] libmachine: Using SSH client type: native
	I1018 09:15:39.784545  295389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1018 09:15:39.784562  295389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:15:40.039180  295389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:15:40.039209  295389 machine.go:96] duration metric: took 1.247061752s to provisionDockerMachine
	I1018 09:15:40.039223  295389 client.go:171] duration metric: took 7.80542558s to LocalClient.Create
	I1018 09:15:40.039254  295389 start.go:167] duration metric: took 7.805513563s to libmachine.API.Create "embed-certs-880603"
	I1018 09:15:40.039276  295389 start.go:293] postStartSetup for "embed-certs-880603" (driver="docker")
	I1018 09:15:40.039294  295389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:15:40.039438  295389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:15:40.039487  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:40.058297  295389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:15:40.159245  295389 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:15:40.163254  295389 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:15:40.163290  295389 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:15:40.163303  295389 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:15:40.163405  295389 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:15:40.163509  295389 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:15:40.163639  295389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:15:40.172115  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:15:40.194950  295389 start.go:296] duration metric: took 155.654711ms for postStartSetup
	I1018 09:15:40.195402  295389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-880603
	I1018 09:15:40.215443  295389 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/config.json ...
	I1018 09:15:40.215726  295389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:15:40.215769  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:40.234376  295389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:15:40.329721  295389 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:15:40.334785  295389 start.go:128] duration metric: took 8.103677206s to createHost
	I1018 09:15:40.334809  295389 start.go:83] releasing machines lock for "embed-certs-880603", held for 8.103824249s
	I1018 09:15:40.334868  295389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-880603
	I1018 09:15:40.354635  295389 ssh_runner.go:195] Run: cat /version.json
	I1018 09:15:40.354700  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:40.354725  295389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:15:40.354801  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:40.373885  295389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:15:40.375891  295389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:15:40.477621  295389 ssh_runner.go:195] Run: systemctl --version
	I1018 09:15:40.540898  295389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:15:40.579139  295389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:15:40.584278  295389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:15:40.584366  295389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:15:40.612917  295389 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 09:15:40.612941  295389 start.go:495] detecting cgroup driver to use...
	I1018 09:15:40.612974  295389 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:15:40.613027  295389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:15:40.629778  295389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:15:40.643333  295389 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:15:40.643415  295389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:15:40.661898  295389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:15:40.680868  295389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:15:40.762191  295389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:15:40.853709  295389 docker.go:234] disabling docker service ...
	I1018 09:15:40.853777  295389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:15:40.873622  295389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:15:40.887336  295389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:15:40.976284  295389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:15:41.065146  295389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:15:41.079121  295389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:15:41.094714  295389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:15:41.094764  295389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:15:41.105859  295389 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:15:41.105913  295389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:15:41.115704  295389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:15:41.125267  295389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:15:41.134829  295389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:15:41.143790  295389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:15:41.153409  295389 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:15:41.168473  295389 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:15:41.178426  295389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:15:41.186821  295389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:15:41.195001  295389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:15:41.280099  295389 ssh_runner.go:195] Run: sudo systemctl restart crio
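
The sed calls above rewrite cri-o's drop-in config in place: pin the pause image, switch the cgroup manager to systemd, re-add conmon_cgroup, and open unprivileged low ports via default_sysctls, then restart crio. A sketch of the two central edits expressed as Go regex replacements (the real code shells these out as sed over SSH, exactly as logged):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchCrioConf rewrites the drop-in config the way the sed commands do:
// whatever the current pause_image and cgroup_manager lines say, replace
// them with the values minikube needs for this Kubernetes version.
func patchCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	fmt.Println(patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"))
}
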
	I1018 09:15:41.601720  295389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:15:41.601802  295389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:15:41.606255  295389 start.go:563] Will wait 60s for crictl version
	I1018 09:15:41.606320  295389 ssh_runner.go:195] Run: which crictl
	I1018 09:15:41.610390  295389 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:15:41.636513  295389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:15:41.636596  295389 ssh_runner.go:195] Run: crio --version
	I1018 09:15:41.666372  295389 ssh_runner.go:195] Run: crio --version
	I1018 09:15:41.697849  295389 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:15:41.699241  295389 cli_runner.go:164] Run: docker network inspect embed-certs-880603 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:15:41.718518  295389 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:15:41.723028  295389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
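
This one-liner is minikube's standard host-record injection (it recurs below for control-plane.minikube.internal): strip any stale tab-separated entry for the host name from /etc/hosts, then append one pointing at the network gateway. An equivalent sketch in Go (illustrative only; the real edit runs inside the guest via bash as shown):

package main

import (
	"fmt"
	"os"
	"strings"
)

// injectHostRecord drops any existing line ending in "\t<host>" and appends
// a fresh "<ip>\t<host>" record, mirroring the grep -v / echo pipeline.
func injectHostRecord(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(injectHostRecord("/etc/hosts", "192.168.76.1", "host.minikube.internal"))
}
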
	I1018 09:15:41.734463  295389 kubeadm.go:883] updating cluster {Name:embed-certs-880603 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:15:41.734573  295389 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:15:41.734623  295389 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:15:41.767221  295389 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:15:41.767241  295389 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:15:41.767291  295389 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:15:41.795400  295389 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:15:41.795422  295389 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:15:41.795429  295389 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:15:41.795522  295389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-880603 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:15:41.795627  295389 ssh_runner.go:195] Run: crio config
	I1018 09:15:41.843775  295389 cni.go:84] Creating CNI manager for ""
	I1018 09:15:41.843801  295389 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:15:41.843825  295389 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:15:41.843847  295389 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880603 NodeName:embed-certs-880603 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:15:41.843988  295389 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880603"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:15:41.844048  295389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:15:41.853671  295389 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:15:41.853744  295389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:15:41.862858  295389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:15:41.876963  295389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:15:41.894051  295389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:15:41.907831  295389 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:15:41.911850  295389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:15:41.922821  295389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1018 09:15:39.929245  285675 node_ready.go:57] node "no-preload-031066" has "Ready":"False" status (will retry)
	W1018 09:15:41.929817  285675 node_ready.go:57] node "no-preload-031066" has "Ready":"False" status (will retry)
	I1018 09:15:42.015006  295389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:15:42.040920  295389 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603 for IP: 192.168.76.2
	I1018 09:15:42.040946  295389 certs.go:195] generating shared ca certs ...
	I1018 09:15:42.040969  295389 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:42.041123  295389 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:15:42.041159  295389 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:15:42.041169  295389 certs.go:257] generating profile certs ...
	I1018 09:15:42.041229  295389 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/client.key
	I1018 09:15:42.041248  295389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/client.crt with IP's: []
	I1018 09:15:42.348714  295389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/client.crt ...
	I1018 09:15:42.348763  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/client.crt: {Name:mkfcbb26b0c0fddf2e62728597f176b171231f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:42.348998  295389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/client.key ...
	I1018 09:15:42.349021  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/client.key: {Name:mkf79e98db8fc4b219ddc41f01278546f024072c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:42.349152  295389 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.key.d64b1fe7
	I1018 09:15:42.349177  295389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.crt.d64b1fe7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 09:15:43.054283  295389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.crt.d64b1fe7 ...
	I1018 09:15:43.054310  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.crt.d64b1fe7: {Name:mk8d1335a0e1ace11ffdf1a21dc71f25fac69c93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:43.054517  295389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.key.d64b1fe7 ...
	I1018 09:15:43.054535  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.key.d64b1fe7: {Name:mkbfd66b890f8dc243c5bdc50cbf46ac1edeb490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:43.054628  295389 certs.go:382] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.crt.d64b1fe7 -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.crt
	I1018 09:15:43.054706  295389 certs.go:386] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.key.d64b1fe7 -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.key
	I1018 09:15:43.054763  295389 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.key
	I1018 09:15:43.054778  295389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.crt with IP's: []
	I1018 09:15:43.200283  295389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.crt ...
	I1018 09:15:43.200309  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.crt: {Name:mke17b12873a1f95776cc7750eb7023cc38f351c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:43.200497  295389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.key ...
	I1018 09:15:43.200510  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.key: {Name:mk4a4481ee8ad8fff82932bca75d858c18666d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
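
Each profile cert above is a leaf signed by the shared minikubeCA, with the SAN list taken straight from the log: the service VIP 10.96.0.1, 127.0.0.1, 10.0.0.1, and the node IP 192.168.76.2. A condensed sketch of that issuance with crypto/x509 (the 26280h lifetime is read off the CertExpiration field in the config dump; the subject and key size are assumptions):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// signServerCert issues an apiserver-style serving certificate signed by
// the given CA, with the IP SANs seen in the log above.
func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}

func main() {
	// Wire up by loading the CA from .minikube/ca.crt and ca.key.
}
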
	I1018 09:15:43.200691  295389 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:15:43.200726  295389 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:15:43.200736  295389 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:15:43.200755  295389 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:15:43.200776  295389 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:15:43.200797  295389 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:15:43.200833  295389 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:15:43.201338  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:15:43.221136  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:15:43.240631  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:15:43.259605  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:15:43.278725  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 09:15:43.297623  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:15:43.317184  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:15:43.336726  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:15:43.357452  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:15:43.377895  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:15:43.397640  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:15:43.417982  295389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:15:43.432495  295389 ssh_runner.go:195] Run: openssl version
	I1018 09:15:43.439052  295389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:15:43.448566  295389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:15:43.453009  295389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:15:43.453082  295389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:15:43.489084  295389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:15:43.498797  295389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:15:43.509163  295389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:15:43.514217  295389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:15:43.514284  295389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:15:43.554285  295389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:15:43.563761  295389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:15:43.573614  295389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:15:43.577928  295389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:15:43.577995  295389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:15:43.614369  295389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
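
The openssl/ln pairs above wire each CA into the system trust store: OpenSSL looks up trust anchors by subject hash, so minikubeCA.pem must also be reachable as b5213941.0 (and the user certs as 51391683.0 and 3ec20f2e.0). A small sketch of the same dance, shelling out to openssl since Go's standard library does not compute OpenSSL's subject hash:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCAHash asks openssl for the certificate's subject hash and creates
// the <hash>.0 symlink that the trust store expects, skipping it if one
// already exists (the "test -L || ln -fs" pattern in the log).
func linkCAHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCAHash("/usr/share/ca-certificates/minikubeCA.pem"))
}
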
	I1018 09:15:43.624275  295389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:15:43.628386  295389 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:15:43.628453  295389 kubeadm.go:400] StartCluster: {Name:embed-certs-880603 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:15:43.628527  295389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:15:43.628592  295389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:15:43.657012  295389 cri.go:89] found id: ""
	I1018 09:15:43.657088  295389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:15:43.666493  295389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:15:43.675282  295389 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:15:43.675366  295389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:15:43.684084  295389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:15:43.684105  295389 kubeadm.go:157] found existing configuration files:
	
	I1018 09:15:43.684170  295389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:15:43.692972  295389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:15:43.693027  295389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:15:43.701505  295389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:15:43.710206  295389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:15:43.710272  295389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:15:43.718731  295389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:15:43.727759  295389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:15:43.727820  295389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:15:43.736019  295389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:15:43.744393  295389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:15:43.744463  295389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
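
Each grep/rm pair above performs the same stale-config check: a kubeconfig under /etc/kubernetes that does not mention https://control-plane.minikube.internal:8443 (or does not exist at all) is removed so that kubeadm init can regenerate it in the next step. A minimal local sketch of that pattern (paths and endpoint taken from the log; this is illustrative, not minikube's ssh_runner code):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// A missing file or a file without the endpoint is treated as stale:
		// remove it and let `kubeadm init` write a fresh one.
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(f) // errors ignored, mirroring `rm -f`
			fmt.Printf("removed stale %s\n", f)
		}
	}
}
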
	I1018 09:15:43.752571  295389 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:15:43.793524  295389 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:15:43.793597  295389 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:15:43.817251  295389 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:15:43.817432  295389 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:15:43.817490  295389 kubeadm.go:318] OS: Linux
	I1018 09:15:43.817552  295389 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:15:43.817647  295389 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:15:43.817730  295389 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:15:43.817798  295389 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:15:43.817872  295389 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:15:43.817959  295389 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:15:43.818026  295389 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:15:43.818093  295389 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:15:43.891827  295389 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:15:43.891982  295389 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:15:43.892123  295389 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:15:43.902044  295389 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1018 09:15:42.355329  275240 pod_ready.go:104] pod "coredns-66bc5c9577-mvszb" is not "Ready", error: <nil>
	I1018 09:15:43.854876  275240 pod_ready.go:94] pod "coredns-66bc5c9577-mvszb" is "Ready"
	I1018 09:15:43.854908  275240 pod_ready.go:86] duration metric: took 37.506231953s for pod "coredns-66bc5c9577-mvszb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:43.854921  275240 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sv2l8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:43.857076  275240 pod_ready.go:99] pod "coredns-66bc5c9577-sv2l8" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-sv2l8" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-sv2l8" not found
	I1018 09:15:43.857102  275240 pod_ready.go:86] duration metric: took 2.173408ms for pod "coredns-66bc5c9577-sv2l8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:43.860097  275240 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:43.864943  275240 pod_ready.go:94] pod "etcd-enable-default-cni-448954" is "Ready"
	I1018 09:15:43.864969  275240 pod_ready.go:86] duration metric: took 4.845664ms for pod "etcd-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:43.867276  275240 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:43.871790  275240 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-448954" is "Ready"
	I1018 09:15:43.871811  275240 pod_ready.go:86] duration metric: took 4.513529ms for pod "kube-apiserver-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:43.874029  275240 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:44.253454  275240 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-448954" is "Ready"
	I1018 09:15:44.253479  275240 pod_ready.go:86] duration metric: took 379.427235ms for pod "kube-controller-manager-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:44.452416  275240 pod_ready.go:83] waiting for pod "kube-proxy-6sbvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:44.853062  275240 pod_ready.go:94] pod "kube-proxy-6sbvw" is "Ready"
	I1018 09:15:44.853093  275240 pod_ready.go:86] duration metric: took 400.649312ms for pod "kube-proxy-6sbvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:45.054092  275240 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:45.452397  275240 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-448954" is "Ready"
	I1018 09:15:45.452423  275240 pod_ready.go:86] duration metric: took 398.304908ms for pod "kube-scheduler-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:45.452438  275240 pod_ready.go:40] duration metric: took 39.108344409s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:15:45.511510  275240 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:15:45.515774  275240 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-448954" cluster and "default" namespace by default
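
The pod_ready lines above poll each control-plane pod by name until it either reports the Ready condition or disappears ("Ready" or be gone). A hedged client-go sketch of that wait loop (namespace and pod name are taken from the log as examples; minikube's own helper lives in pod_ready.go and differs in detail):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-6sbvw", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // pod is gone: counted as done, as in the log above
			}
			if err != nil {
				return false, nil // transient error: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil // pod is "Ready"
				}
			}
			return false, nil
		})
	fmt.Println("wait finished:", err)
}
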
	
	
	==> CRI-O <==
	Oct 18 09:15:31 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:31.881067934Z" level=info msg="Starting container: 9ac1c873ee749f4188536f8d880d3fee38c9996025f7a985eb5b42f2d8dd28ce" id=275c4397-89fc-45a6-9ff7-4b3b24ea6d45 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:15:31 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:31.88360608Z" level=info msg="Started container" PID=2118 containerID=9ac1c873ee749f4188536f8d880d3fee38c9996025f7a985eb5b42f2d8dd28ce description=kube-system/coredns-5dd5756b68-gwttp/coredns id=275c4397-89fc-45a6-9ff7-4b3b24ea6d45 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e668ef1bd94430dd3b30e1f09ca35529882d0c37def2c2e34c25a6c4994438d
	Oct 18 09:15:34 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:34.965689994Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b466c2a0-cceb-4b81-a51b-54e3bdcdf9c0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:15:34 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:34.96582509Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:15:34 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:34.973590795Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1e7299224d92cbaab360ad61c0ab234bbc00803fe8ed3921ad142a3fc47e766e UID:5e92717a-fb6d-4a62-a6dd-08ea5401487b NetNS:/var/run/netns/7d903656-37f8-4c00-a325-ed8678b964c4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004f8a78}] Aliases:map[]}"
	Oct 18 09:15:34 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:34.976480769Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:15:34 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:34.990472441Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1e7299224d92cbaab360ad61c0ab234bbc00803fe8ed3921ad142a3fc47e766e UID:5e92717a-fb6d-4a62-a6dd-08ea5401487b NetNS:/var/run/netns/7d903656-37f8-4c00-a325-ed8678b964c4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004f8a78}] Aliases:map[]}"
	Oct 18 09:15:34 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:34.990620344Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 09:15:34 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:34.991470322Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:15:34 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:34.992790226Z" level=info msg="Ran pod sandbox 1e7299224d92cbaab360ad61c0ab234bbc00803fe8ed3921ad142a3fc47e766e with infra container: default/busybox/POD" id=b466c2a0-cceb-4b81-a51b-54e3bdcdf9c0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:15:34 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:34.994249292Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dded82e0-a5a3-4aec-9565-4a4e9bc5076a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:15:34 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:34.994463005Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=dded82e0-a5a3-4aec-9565-4a4e9bc5076a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:15:34 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:34.994512891Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=dded82e0-a5a3-4aec-9565-4a4e9bc5076a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:15:34 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:34.995109571Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cf9e3b06-4b58-460c-b5f0-16b8b8c6a3ad name=/runtime.v1.ImageService/PullImage
	Oct 18 09:15:34 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:34.997078467Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 09:15:37 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:37.527475433Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=cf9e3b06-4b58-460c-b5f0-16b8b8c6a3ad name=/runtime.v1.ImageService/PullImage
	Oct 18 09:15:37 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:37.528489312Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c5a3b550-7aae-4cdf-8463-66fdc9e85adc name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:15:37 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:37.530036927Z" level=info msg="Creating container: default/busybox/busybox" id=0f39868a-2995-4ab0-8049-d5d0b4aa12cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:15:37 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:37.530969367Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:15:37 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:37.538129667Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:15:37 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:37.538850017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:15:37 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:37.586950456Z" level=info msg="Created container 7226e3542346359bc2c8cdcf866dccd7eebb6e40b4a228f91a4068985223f3b3: default/busybox/busybox" id=0f39868a-2995-4ab0-8049-d5d0b4aa12cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:15:37 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:37.588302961Z" level=info msg="Starting container: 7226e3542346359bc2c8cdcf866dccd7eebb6e40b4a228f91a4068985223f3b3" id=fd82bfd2-e506-4034-8930-5dc7b800b0b6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:15:37 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:37.592019942Z" level=info msg="Started container" PID=2192 containerID=7226e3542346359bc2c8cdcf866dccd7eebb6e40b4a228f91a4068985223f3b3 description=default/busybox/busybox id=fd82bfd2-e506-4034-8930-5dc7b800b0b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e7299224d92cbaab360ad61c0ab234bbc00803fe8ed3921ad142a3fc47e766e
	Oct 18 09:15:44 old-k8s-version-951975 crio[771]: time="2025-10-18T09:15:44.742367449Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
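
The CRI-O section above shows the standard CRI image path for the busybox pod: ImageStatus reports the image missing, PullImage fetches it by digest, then CreateContainer/StartContainer run it. A sketch of the first two calls against the CRI socket (socket path taken from the node's cri-socket annotation below; assumes the k8s.io/cri-api and google.golang.org/grpc modules):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

	// Check local status first; only pull when the image is absent,
	// mirroring the ImageStatus -> "not found" -> PullImage sequence above.
	st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
	if err != nil {
		panic(err)
	}
	if st.Image == nil {
		if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec}); err != nil {
			panic(err)
		}
	}
	fmt.Println("image present")
}
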
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	7226e35423463       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   1e7299224d92c       busybox                                          default
	9ac1c873ee749       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 seconds ago      Running             coredns                   0                   5e668ef1bd944       coredns-5dd5756b68-gwttp                         kube-system
	bb6dc6418ebe4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   b833ea6548fd4       storage-provisioner                              kube-system
	51c32e88b94c9       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   013afa675cadc       kindnet-k2756                                    kube-system
	4fbe65ccd36b7       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      28 seconds ago      Running             kube-proxy                0                   bc6e6a5400b67       kube-proxy-rrzqp                                 kube-system
	85917e7326d73       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      46 seconds ago      Running             kube-apiserver            0                   1f669ea0fca2e       kube-apiserver-old-k8s-version-951975            kube-system
	9b82da9faa712       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      46 seconds ago      Running             etcd                      0                   496a82689aaf8       etcd-old-k8s-version-951975                      kube-system
	1e07cc383fb4b       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      46 seconds ago      Running             kube-controller-manager   0                   651914cd9b73e       kube-controller-manager-old-k8s-version-951975   kube-system
	417995d0d15e9       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      46 seconds ago      Running             kube-scheduler            0                   a00e8dbff4735       kube-scheduler-old-k8s-version-951975            kube-system
	
	
	==> coredns [9ac1c873ee749f4188536f8d880d3fee38c9996025f7a985eb5b42f2d8dd28ce] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51349 - 21550 "HINFO IN 7236540853016443704.6333381631828322766. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.09748792s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-951975
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-951975
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=old-k8s-version-951975
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_15_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:15:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-951975
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:15:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:15:36 +0000   Sat, 18 Oct 2025 09:15:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:15:36 +0000   Sat, 18 Oct 2025 09:15:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:15:36 +0000   Sat, 18 Oct 2025 09:15:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:15:36 +0000   Sat, 18 Oct 2025 09:15:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-951975
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                bca7ca56-ad4d-4955-80a5-36cf90a3bf8e
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-gwttp                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-old-k8s-version-951975                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-k2756                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-951975             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-951975    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-rrzqp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-951975             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node old-k8s-version-951975 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-951975 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-951975 event: Registered Node old-k8s-version-951975 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-951975 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000028] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[Oct18 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 0e c2 2a eb 06 e6 66 0c bb bb 22 50 08 00
	[Oct18 09:14] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	
	
	==> etcd [9b82da9faa7123c9a1332a38b37766a898ddaee4c58dff07cdce1fb152645990] <==
	{"level":"info","ts":"2025-10-18T09:15:02.015332Z","caller":"traceutil/trace.go:171","msg":"trace[1038842167] transaction","detail":"{read_only:false; response_revision:5; number_of_response:1; }","duration":"226.512634ms","start":"2025-10-18T09:15:01.788795Z","end":"2025-10-18T09:15:02.015307Z","steps":["trace[1038842167] 'process raft request'  (duration: 225.994612ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:02.015403Z","caller":"traceutil/trace.go:171","msg":"trace[38832515] range","detail":"{range_begin:/registry/minions/old-k8s-version-951975; range_end:; response_count:1; response_revision:12; }","duration":"227.054717ms","start":"2025-10-18T09:15:01.788326Z","end":"2025-10-18T09:15:02.015381Z","steps":["trace[38832515] 'agreement among raft nodes before linearized reading'  (duration: 226.791629ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:15:02.015448Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.834851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-18T09:15:02.015495Z","caller":"traceutil/trace.go:171","msg":"trace[109474416] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:0; response_revision:12; }","duration":"136.879981ms","start":"2025-10-18T09:15:01.878595Z","end":"2025-10-18T09:15:02.015475Z","steps":["trace[109474416] 'agreement among raft nodes before linearized reading'  (duration: 136.804566ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:02.015555Z","caller":"traceutil/trace.go:171","msg":"trace[1467791774] transaction","detail":"{read_only:false; response_revision:6; number_of_response:1; }","duration":"226.464264ms","start":"2025-10-18T09:15:01.78908Z","end":"2025-10-18T09:15:02.015545Z","steps":["trace[1467791774] 'process raft request'  (duration: 225.806419ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:15:02.015604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.997956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-10-18T09:15:02.015636Z","caller":"traceutil/trace.go:171","msg":"trace[991813541] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:12; }","duration":"165.032929ms","start":"2025-10-18T09:15:01.850592Z","end":"2025-10-18T09:15:02.015625Z","steps":["trace[991813541] 'agreement among raft nodes before linearized reading'  (duration: 164.974417ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:02.015637Z","caller":"traceutil/trace.go:171","msg":"trace[338400414] transaction","detail":"{read_only:false; response_revision:11; number_of_response:1; }","duration":"188.035624ms","start":"2025-10-18T09:15:01.827591Z","end":"2025-10-18T09:15:02.015627Z","steps":["trace[338400414] 'process raft request'  (duration: 187.450406ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:02.015655Z","caller":"traceutil/trace.go:171","msg":"trace[1781908875] transaction","detail":"{read_only:false; response_revision:7; number_of_response:1; }","duration":"226.52106ms","start":"2025-10-18T09:15:01.789127Z","end":"2025-10-18T09:15:02.015648Z","steps":["trace[1781908875] 'process raft request'  (duration: 225.801371ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:02.015757Z","caller":"traceutil/trace.go:171","msg":"trace[394914234] transaction","detail":"{read_only:false; response_revision:8; number_of_response:1; }","duration":"226.588409ms","start":"2025-10-18T09:15:01.789161Z","end":"2025-10-18T09:15:02.015749Z","steps":["trace[394914234] 'process raft request'  (duration: 225.796254ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:15:02.015778Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.853112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-10-18T09:15:02.015804Z","caller":"traceutil/trace.go:171","msg":"trace[1966561751] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:12; }","duration":"200.881822ms","start":"2025-10-18T09:15:01.814915Z","end":"2025-10-18T09:15:02.015797Z","steps":["trace[1966561751] 'agreement among raft nodes before linearized reading'  (duration: 200.832784ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:02.015817Z","caller":"traceutil/trace.go:171","msg":"trace[1183304463] transaction","detail":"{read_only:false; response_revision:12; number_of_response:1; }","duration":"177.159861ms","start":"2025-10-18T09:15:01.838648Z","end":"2025-10-18T09:15:02.015808Z","steps":["trace[1183304463] 'process raft request'  (duration: 176.427785ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:02.015901Z","caller":"traceutil/trace.go:171","msg":"trace[2019594591] transaction","detail":"{read_only:false; response_revision:9; number_of_response:1; }","duration":"226.721507ms","start":"2025-10-18T09:15:01.789168Z","end":"2025-10-18T09:15:02.015889Z","steps":["trace[2019594591] 'process raft request'  (duration: 225.813608ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:15:02.015945Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.091583ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-10-18T09:15:02.015969Z","caller":"traceutil/trace.go:171","msg":"trace[464226466] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:12; }","duration":"227.119126ms","start":"2025-10-18T09:15:01.788843Z","end":"2025-10-18T09:15:02.015962Z","steps":["trace[464226466] 'agreement among raft nodes before linearized reading'  (duration: 227.07129ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:02.016081Z","caller":"traceutil/trace.go:171","msg":"trace[364066574] transaction","detail":"{read_only:false; response_revision:10; number_of_response:1; }","duration":"226.718961ms","start":"2025-10-18T09:15:01.789354Z","end":"2025-10-18T09:15:02.016072Z","steps":["trace[364066574] 'process raft request'  (duration: 225.659568ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:17.217449Z","caller":"traceutil/trace.go:171","msg":"trace[1334935978] linearizableReadLoop","detail":"{readStateIndex:345; appliedIndex:344; }","duration":"132.424271ms","start":"2025-10-18T09:15:17.084961Z","end":"2025-10-18T09:15:17.217385Z","steps":["trace[1334935978] 'read index received'  (duration: 50.092598ms)","trace[1334935978] 'applied index is now lower than readState.Index'  (duration: 82.330101ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:15:17.217743Z","caller":"traceutil/trace.go:171","msg":"trace[1177451281] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"164.747986ms","start":"2025-10-18T09:15:17.052977Z","end":"2025-10-18T09:15:17.217725Z","steps":["trace[1177451281] 'process raft request'  (duration: 82.141893ms)","trace[1177451281] 'compare'  (duration: 82.070534ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:15:17.217978Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.033836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" ","response":"range_response_count:1 size:193"}
	{"level":"info","ts":"2025-10-18T09:15:17.218018Z","caller":"traceutil/trace.go:171","msg":"trace[1287871299] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:335; }","duration":"133.091476ms","start":"2025-10-18T09:15:17.084916Z","end":"2025-10-18T09:15:17.218007Z","steps":["trace[1287871299] 'agreement among raft nodes before linearized reading'  (duration: 132.992348ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:17.430848Z","caller":"traceutil/trace.go:171","msg":"trace[51087272] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"122.63001ms","start":"2025-10-18T09:15:17.308195Z","end":"2025-10-18T09:15:17.430825Z","steps":["trace[51087272] 'process raft request'  (duration: 38.725966ms)","trace[51087272] 'compare'  (duration: 83.789098ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:15:17.433782Z","caller":"traceutil/trace.go:171","msg":"trace[542526456] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"123.260513ms","start":"2025-10-18T09:15:17.3105Z","end":"2025-10-18T09:15:17.43376Z","steps":["trace[542526456] 'process raft request'  (duration: 122.942386ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:15:17.433969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.316332ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-10-18T09:15:17.433996Z","caller":"traceutil/trace.go:171","msg":"trace[79717307] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:338; }","duration":"100.36124ms","start":"2025-10-18T09:15:17.333626Z","end":"2025-10-18T09:15:17.433988Z","steps":["trace[79717307] 'agreement among raft nodes before linearized reading'  (duration: 100.287419ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:15:46 up 58 min,  0 user,  load average: 4.37, 3.45, 2.27
	Linux old-k8s-version-951975 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [51c32e88b94c9540e141c0ffb42b2757b33d1c247490bd564402e84ad5db7e11] <==
	I1018 09:15:21.037561       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:15:21.037862       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 09:15:21.038089       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:15:21.038109       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:15:21.038136       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:15:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:15:21.239147       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:15:21.336944       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:15:21.336995       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:15:21.337210       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:15:21.637119       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:15:21.637156       1 metrics.go:72] Registering metrics
	I1018 09:15:21.637236       1 controller.go:711] "Syncing nftables rules"
	I1018 09:15:31.242954       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 09:15:31.243075       1 main.go:301] handling current node
	I1018 09:15:41.239432       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 09:15:41.239472       1 main.go:301] handling current node
	
	
	==> kube-apiserver [85917e7326d73ab3b352ba2b38d9433d1480e342dc18fa832d812033c4ff45e2] <==
	I1018 09:15:01.786475       1 aggregator.go:166] initial CRD sync complete...
	I1018 09:15:01.786500       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 09:15:01.786508       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:15:01.786517       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:15:01.787815       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 09:15:01.798061       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 09:15:01.798062       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 09:15:01.798086       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 09:15:01.827186       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 09:15:02.017449       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:15:02.701796       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:15:02.709287       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:15:02.709306       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:15:03.384904       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:15:03.428719       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:15:03.522169       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:15:03.533136       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1018 09:15:03.534624       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 09:15:03.540133       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:15:03.779765       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 09:15:05.271742       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 09:15:05.292457       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:15:05.314008       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1018 09:15:17.470619       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1018 09:15:17.589885       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1e07cc383fb4b22ab9b8bd25ead443a40ffbdf6c48af8d0f1d410e33d6e0591b] <==
	I1018 09:15:16.834085       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1018 09:15:16.841429       1 shared_informer.go:318] Caches are synced for persistent volume
	I1018 09:15:16.843793       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 09:15:17.165990       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:15:17.237819       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:15:17.237853       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 09:15:17.605871       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rrzqp"
	I1018 09:15:17.607483       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1018 09:15:17.609331       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k2756"
	I1018 09:15:17.667446       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-hrppb"
	I1018 09:15:17.683040       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gwttp"
	I1018 09:15:17.700059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.506293ms"
	I1018 09:15:17.713738       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.603592ms"
	I1018 09:15:17.713870       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.867µs"
	I1018 09:15:17.888277       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1018 09:15:17.905132       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-hrppb"
	I1018 09:15:17.920383       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.665327ms"
	I1018 09:15:17.937656       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.207967ms"
	I1018 09:15:17.937785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.187µs"
	I1018 09:15:31.523251       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.617µs"
	I1018 09:15:31.546965       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="136.519µs"
	I1018 09:15:31.780292       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1018 09:15:32.547364       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.588µs"
	I1018 09:15:32.582187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.109519ms"
	I1018 09:15:32.582332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.283µs"
	
	
	==> kube-proxy [4fbe65ccd36b7c10f99f71de8bedbf397df7b1b03265de118240157b889e3764] <==
	I1018 09:15:18.070890       1 server_others.go:69] "Using iptables proxy"
	I1018 09:15:18.082609       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1018 09:15:18.106541       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:15:18.109674       1 server_others.go:152] "Using iptables Proxier"
	I1018 09:15:18.109730       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 09:15:18.109742       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 09:15:18.109789       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 09:15:18.110167       1 server.go:846] "Version info" version="v1.28.0"
	I1018 09:15:18.110188       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:15:18.110948       1 config.go:97] "Starting endpoint slice config controller"
	I1018 09:15:18.111046       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 09:15:18.111070       1 config.go:315] "Starting node config controller"
	I1018 09:15:18.111086       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 09:15:18.110953       1 config.go:188] "Starting service config controller"
	I1018 09:15:18.111608       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 09:15:18.211554       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 09:15:18.211601       1 shared_informer.go:318] Caches are synced for node config
	I1018 09:15:18.211714       1 shared_informer.go:318] Caches are synced for service config
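
The "Waiting for caches to sync" / "Caches are synced" pairs that appear here and in the kube-apiserver, kube-controller-manager, and kindnet sections are client-go's shared-informer startup handshake: a component starts its informers and then blocks until the initial LIST has populated the local cache. A minimal sketch of that pattern (kubeconfig loading is a placeholder):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	pods := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // informers begin their initial LIST/WATCH here

	// Blocks until the initial LIST is in the local cache; this is the
	// moment the components above log "Caches are synced".
	if !cache.WaitForCacheSync(stop, pods.HasSynced) {
		panic("caches never synced")
	}
	fmt.Println("caches are synced")
}
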
	
	
	==> kube-scheduler [417995d0d15e9e0ac4adc104c484d4958789ccd6a24334a3e41f15ecf3256477] <==
	W1018 09:15:02.757008       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1018 09:15:02.757053       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1018 09:15:02.794174       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1018 09:15:02.794221       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1018 09:15:02.918951       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1018 09:15:02.919515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1018 09:15:02.935335       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1018 09:15:02.935395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1018 09:15:02.978562       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1018 09:15:02.978710       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1018 09:15:03.013629       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1018 09:15:03.013704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1018 09:15:03.023040       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1018 09:15:03.023092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1018 09:15:03.059158       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1018 09:15:03.059291       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1018 09:15:03.073731       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1018 09:15:03.073771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1018 09:15:03.099611       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1018 09:15:03.099658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1018 09:15:03.102657       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1018 09:15:03.102828       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1018 09:15:03.332265       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1018 09:15:03.332312       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1018 09:15:05.778795       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 09:15:16 old-k8s-version-951975 kubelet[1396]: I1018 09:15:16.610196    1396 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:15:16 old-k8s-version-951975 kubelet[1396]: I1018 09:15:16.611099    1396 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:15:17 old-k8s-version-951975 kubelet[1396]: I1018 09:15:17.626541    1396 topology_manager.go:215] "Topology Admit Handler" podUID="a85dd000-dd96-42a9-bca1-92345ab498da" podNamespace="kube-system" podName="kindnet-k2756"
	Oct 18 09:15:17 old-k8s-version-951975 kubelet[1396]: I1018 09:15:17.629773    1396 topology_manager.go:215] "Topology Admit Handler" podUID="1dbe03c6-5db9-49c5-9016-c421b2d7c581" podNamespace="kube-system" podName="kube-proxy-rrzqp"
	Oct 18 09:15:17 old-k8s-version-951975 kubelet[1396]: I1018 09:15:17.668596    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dbe03c6-5db9-49c5-9016-c421b2d7c581-xtables-lock\") pod \"kube-proxy-rrzqp\" (UID: \"1dbe03c6-5db9-49c5-9016-c421b2d7c581\") " pod="kube-system/kube-proxy-rrzqp"
	Oct 18 09:15:17 old-k8s-version-951975 kubelet[1396]: I1018 09:15:17.668687    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a85dd000-dd96-42a9-bca1-92345ab498da-cni-cfg\") pod \"kindnet-k2756\" (UID: \"a85dd000-dd96-42a9-bca1-92345ab498da\") " pod="kube-system/kindnet-k2756"
	Oct 18 09:15:17 old-k8s-version-951975 kubelet[1396]: I1018 09:15:17.668721    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1dbe03c6-5db9-49c5-9016-c421b2d7c581-kube-proxy\") pod \"kube-proxy-rrzqp\" (UID: \"1dbe03c6-5db9-49c5-9016-c421b2d7c581\") " pod="kube-system/kube-proxy-rrzqp"
	Oct 18 09:15:17 old-k8s-version-951975 kubelet[1396]: I1018 09:15:17.668748    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dbe03c6-5db9-49c5-9016-c421b2d7c581-lib-modules\") pod \"kube-proxy-rrzqp\" (UID: \"1dbe03c6-5db9-49c5-9016-c421b2d7c581\") " pod="kube-system/kube-proxy-rrzqp"
	Oct 18 09:15:17 old-k8s-version-951975 kubelet[1396]: I1018 09:15:17.668778    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a85dd000-dd96-42a9-bca1-92345ab498da-xtables-lock\") pod \"kindnet-k2756\" (UID: \"a85dd000-dd96-42a9-bca1-92345ab498da\") " pod="kube-system/kindnet-k2756"
	Oct 18 09:15:17 old-k8s-version-951975 kubelet[1396]: I1018 09:15:17.668803    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a85dd000-dd96-42a9-bca1-92345ab498da-lib-modules\") pod \"kindnet-k2756\" (UID: \"a85dd000-dd96-42a9-bca1-92345ab498da\") " pod="kube-system/kindnet-k2756"
	Oct 18 09:15:17 old-k8s-version-951975 kubelet[1396]: I1018 09:15:17.668841    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85q87\" (UniqueName: \"kubernetes.io/projected/a85dd000-dd96-42a9-bca1-92345ab498da-kube-api-access-85q87\") pod \"kindnet-k2756\" (UID: \"a85dd000-dd96-42a9-bca1-92345ab498da\") " pod="kube-system/kindnet-k2756"
	Oct 18 09:15:17 old-k8s-version-951975 kubelet[1396]: I1018 09:15:17.668909    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsdtk\" (UniqueName: \"kubernetes.io/projected/1dbe03c6-5db9-49c5-9016-c421b2d7c581-kube-api-access-nsdtk\") pod \"kube-proxy-rrzqp\" (UID: \"1dbe03c6-5db9-49c5-9016-c421b2d7c581\") " pod="kube-system/kube-proxy-rrzqp"
	Oct 18 09:15:18 old-k8s-version-951975 kubelet[1396]: I1018 09:15:18.506795    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rrzqp" podStartSLOduration=1.5067293899999998 podCreationTimestamp="2025-10-18 09:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:15:18.506479877 +0000 UTC m=+13.272178770" watchObservedRunningTime="2025-10-18 09:15:18.50672939 +0000 UTC m=+13.272428283"
	Oct 18 09:15:21 old-k8s-version-951975 kubelet[1396]: I1018 09:15:21.516950    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-k2756" podStartSLOduration=1.766000878 podCreationTimestamp="2025-10-18 09:15:17 +0000 UTC" firstStartedPulling="2025-10-18 09:15:17.954999338 +0000 UTC m=+12.720698215" lastFinishedPulling="2025-10-18 09:15:20.70588834 +0000 UTC m=+15.471587224" observedRunningTime="2025-10-18 09:15:21.516863951 +0000 UTC m=+16.282562866" watchObservedRunningTime="2025-10-18 09:15:21.516889887 +0000 UTC m=+16.282588780"
	Oct 18 09:15:31 old-k8s-version-951975 kubelet[1396]: I1018 09:15:31.489135    1396 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 18 09:15:31 old-k8s-version-951975 kubelet[1396]: I1018 09:15:31.521803    1396 topology_manager.go:215] "Topology Admit Handler" podUID="201e8ed1-c6b6-4ac6-ada3-291e6b900df8" podNamespace="kube-system" podName="storage-provisioner"
	Oct 18 09:15:31 old-k8s-version-951975 kubelet[1396]: I1018 09:15:31.523206    1396 topology_manager.go:215] "Topology Admit Handler" podUID="349d3695-c749-4802-a9eb-53de5ac78c69" podNamespace="kube-system" podName="coredns-5dd5756b68-gwttp"
	Oct 18 09:15:31 old-k8s-version-951975 kubelet[1396]: I1018 09:15:31.566594    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/201e8ed1-c6b6-4ac6-ada3-291e6b900df8-tmp\") pod \"storage-provisioner\" (UID: \"201e8ed1-c6b6-4ac6-ada3-291e6b900df8\") " pod="kube-system/storage-provisioner"
	Oct 18 09:15:31 old-k8s-version-951975 kubelet[1396]: I1018 09:15:31.566662    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjghc\" (UniqueName: \"kubernetes.io/projected/201e8ed1-c6b6-4ac6-ada3-291e6b900df8-kube-api-access-vjghc\") pod \"storage-provisioner\" (UID: \"201e8ed1-c6b6-4ac6-ada3-291e6b900df8\") " pod="kube-system/storage-provisioner"
	Oct 18 09:15:31 old-k8s-version-951975 kubelet[1396]: I1018 09:15:31.566728    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/349d3695-c749-4802-a9eb-53de5ac78c69-config-volume\") pod \"coredns-5dd5756b68-gwttp\" (UID: \"349d3695-c749-4802-a9eb-53de5ac78c69\") " pod="kube-system/coredns-5dd5756b68-gwttp"
	Oct 18 09:15:31 old-k8s-version-951975 kubelet[1396]: I1018 09:15:31.566775    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7gmq\" (UniqueName: \"kubernetes.io/projected/349d3695-c749-4802-a9eb-53de5ac78c69-kube-api-access-d7gmq\") pod \"coredns-5dd5756b68-gwttp\" (UID: \"349d3695-c749-4802-a9eb-53de5ac78c69\") " pod="kube-system/coredns-5dd5756b68-gwttp"
	Oct 18 09:15:32 old-k8s-version-951975 kubelet[1396]: I1018 09:15:32.547328    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gwttp" podStartSLOduration=15.547269708 podCreationTimestamp="2025-10-18 09:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:15:32.546999718 +0000 UTC m=+27.312698611" watchObservedRunningTime="2025-10-18 09:15:32.547269708 +0000 UTC m=+27.312968601"
	Oct 18 09:15:32 old-k8s-version-951975 kubelet[1396]: I1018 09:15:32.572890    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.572831723 podCreationTimestamp="2025-10-18 09:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:15:32.558206972 +0000 UTC m=+27.323905867" watchObservedRunningTime="2025-10-18 09:15:32.572831723 +0000 UTC m=+27.338530617"
	Oct 18 09:15:34 old-k8s-version-951975 kubelet[1396]: I1018 09:15:34.662905    1396 topology_manager.go:215] "Topology Admit Handler" podUID="5e92717a-fb6d-4a62-a6dd-08ea5401487b" podNamespace="default" podName="busybox"
	Oct 18 09:15:34 old-k8s-version-951975 kubelet[1396]: I1018 09:15:34.687269    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsqkc\" (UniqueName: \"kubernetes.io/projected/5e92717a-fb6d-4a62-a6dd-08ea5401487b-kube-api-access-fsqkc\") pod \"busybox\" (UID: \"5e92717a-fb6d-4a62-a6dd-08ea5401487b\") " pod="default/busybox"
	
	
	==> storage-provisioner [bb6dc6418ebe401b198aa158057ec7a1e6d8b1ea8d9563122120523feb2a6851] <==
	I1018 09:15:31.892768       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:15:31.903753       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:15:31.903807       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 09:15:31.913129       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:15:31.913286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-951975_4a557dc1-9b6d-4a7d-97f0-3863e419f963!
	I1018 09:15:31.913393       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27b4ae7e-91ad-46cf-b758-f945092ba79c", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-951975_4a557dc1-9b6d-4a7d-97f0-3863e419f963 became leader
	I1018 09:15:32.013680       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-951975_4a557dc1-9b6d-4a7d-97f0-3863e419f963!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-951975 -n old-k8s-version-951975
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-951975 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.42s)
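For reference, the post-mortem's health probe above (helpers_test.go:269) reduces to listing every pod whose phase is not Running; an empty result means the control plane survived and the failure is confined to the addon-enable path. A minimal sketch of the same check run by hand, assuming the old-k8s-version-951975 context is still reachable:

    # List all pods, across namespaces, not in the Running phase
    # (same field selector the test helper uses above):
    kubectl --context old-k8s-version-951975 get po -A \
      --field-selector=status.phase!=Running \
      -o=jsonpath='{.items[*].metadata.name}'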

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.3s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-031066 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-031066 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (258.575734ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:16:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-031066 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
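The paused-state probe behind this exit can be reproduced by hand; the session below is a sketch, not part of the test run. The first command is taken verbatim from the error text, and the second cross-checks over CRI, which queries crio directly instead of reading runc's state directory:

    # The check minikube runs inside the node; per the stderr above it fails
    # because /run/runc does not exist on this crio-based node:
    minikube ssh -p no-preload-031066 -- sudo runc list -f json

    # Cross-check that containers are still visible to the CRI runtime:
    minikube ssh -p no-preload-031066 -- sudo crictl ps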
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-031066 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-031066 describe deploy/metrics-server -n kube-system: exit status 1 (78.041641ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-031066 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
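Had the deployment existed, the assertion at start_stop_delete_test.go:219 would have inspected the container image on the metrics-server Deployment; a hypothetical manual equivalent of that check:

    # Expected to contain fake.domain/registry.k8s.io/echoserver:1.4:
    kubectl --context no-preload-031066 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'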
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-031066
helpers_test.go:243: (dbg) docker inspect no-preload-031066:

-- stdout --
	[
	    {
	        "Id": "dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0",
	        "Created": "2025-10-18T09:14:59.840380685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286262,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:14:59.891083979Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0/hostname",
	        "HostsPath": "/var/lib/docker/containers/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0/hosts",
	        "LogPath": "/var/lib/docker/containers/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0-json.log",
	        "Name": "/no-preload-031066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-031066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-031066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0",
	                "LowerDir": "/var/lib/docker/overlay2/1e0e83685e550417cddd524d2d8b786a0c193a25b235b1df64d1bc4562ba00b1-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1e0e83685e550417cddd524d2d8b786a0c193a25b235b1df64d1bc4562ba00b1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1e0e83685e550417cddd524d2d8b786a0c193a25b235b1df64d1bc4562ba00b1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1e0e83685e550417cddd524d2d8b786a0c193a25b235b1df64d1bc4562ba00b1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-031066",
	                "Source": "/var/lib/docker/volumes/no-preload-031066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-031066",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-031066",
	                "name.minikube.sigs.k8s.io": "no-preload-031066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be9add2293cc7cf02335bad86083999acac771f7c936b2540a0fd458638f5884",
	            "SandboxKey": "/var/run/docker/netns/be9add2293cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-031066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:29:a6:77:4b:b3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "659f168a65764f8b90baada540d0c1e70a7a90e0cd6e43139115c0a2c2f0c906",
	                    "EndpointID": "cfc4a8ba12aac35c36138a486a9b5f8c25f42d96617b91d6108dbf8ec4ae1390",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-031066",
	                        "dce899f902ae"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
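The inspect dump above can be narrowed to the fields the post-mortem actually uses (container state, the node IP on the profile network, and the host port fronting the apiserver); the format templates below are illustrative, not part of the harness:

    # Container state plus the node IP on the profile's network:
    docker inspect no-preload-031066 \
      --format 'status={{.State.Status}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'

    # Host port published for 8443/tcp (the apiserver; 127.0.0.1:33096 above):
    docker port no-preload-031066 8443/tcp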
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-031066 -n no-preload-031066
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-031066 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-031066 logs -n 25: (1.09583694s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-448954 sudo docker system info                                                                                                              │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	│ ssh     │ -p flannel-448954 sudo systemctl status cri-docker --all --full --no-pager                                                                             │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	│ ssh     │ -p flannel-448954 sudo systemctl cat cri-docker --no-pager                                                                                             │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                        │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	│ ssh     │ -p flannel-448954 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                  │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo cri-dockerd --version                                                                                                           │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo systemctl status containerd --all --full --no-pager                                                                             │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	│ ssh     │ -p flannel-448954 sudo systemctl cat containerd --no-pager                                                                                             │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo cat /lib/systemd/system/containerd.service                                                                                      │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo cat /etc/containerd/config.toml                                                                                                 │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo containerd config dump                                                                                                          │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo systemctl status crio --all --full --no-pager                                                                                   │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo systemctl cat crio --no-pager                                                                                                   │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                         │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ ssh     │ -p flannel-448954 sudo crio config                                                                                                                     │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ delete  │ -p flannel-448954                                                                                                                                      │ flannel-448954            │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ start   │ -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ embed-certs-880603        │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-951975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain           │ old-k8s-version-951975    │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 pgrep -a kubelet                                                                                                          │ enable-default-cni-448954 │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │ 18 Oct 25 09:15 UTC │
	│ stop    │ -p old-k8s-version-951975 --alsologtostderr -v=3                                                                                                       │ old-k8s-version-951975    │ jenkins │ v1.37.0 │ 18 Oct 25 09:15 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/nsswitch.conf                                                                                               │ enable-default-cni-448954 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/hosts                                                                                                       │ enable-default-cni-448954 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-031066 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                │ no-preload-031066         │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/resolv.conf                                                                                                 │ enable-default-cni-448954 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo crictl pods                                                                                                          │ enable-default-cni-448954 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:15:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:15:32.008965  295389 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:15:32.009235  295389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:15:32.009245  295389 out.go:374] Setting ErrFile to fd 2...
	I1018 09:15:32.009253  295389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:15:32.009574  295389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:15:32.010217  295389 out.go:368] Setting JSON to false
	I1018 09:15:32.011540  295389 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3480,"bootTime":1760775452,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:15:32.011651  295389 start.go:141] virtualization: kvm guest
	I1018 09:15:32.014179  295389 out.go:179] * [embed-certs-880603] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:15:32.015497  295389 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:15:32.015499  295389 notify.go:220] Checking for updates...
	I1018 09:15:32.017790  295389 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:15:32.019169  295389 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:15:32.020430  295389 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:15:32.021658  295389 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:15:32.022996  295389 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:15:32.027124  295389 config.go:182] Loaded profile config "enable-default-cni-448954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:15:32.027293  295389 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:15:32.027411  295389 config.go:182] Loaded profile config "old-k8s-version-951975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:15:32.027526  295389 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:15:32.054525  295389 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:15:32.054611  295389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:15:32.128651  295389 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:83 SystemTime:2025-10-18 09:15:32.117156244 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:15:32.128759  295389 docker.go:318] overlay module found
	I1018 09:15:32.130483  295389 out.go:179] * Using the docker driver based on user configuration
	I1018 09:15:32.131878  295389 start.go:305] selected driver: docker
	I1018 09:15:32.131900  295389 start.go:925] validating driver "docker" against <nil>
	I1018 09:15:32.131911  295389 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:15:32.132501  295389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:15:32.197483  295389 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:84 SystemTime:2025-10-18 09:15:32.186550447 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:15:32.197677  295389 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:15:32.197911  295389 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:15:32.199614  295389 out.go:179] * Using Docker driver with root privileges
	I1018 09:15:32.201005  295389 cni.go:84] Creating CNI manager for ""
	I1018 09:15:32.201070  295389 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:15:32.201080  295389 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:15:32.201153  295389 start.go:349] cluster config:
	{Name:embed-certs-880603 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:15:32.202491  295389 out.go:179] * Starting "embed-certs-880603" primary control-plane node in "embed-certs-880603" cluster
	I1018 09:15:32.203770  295389 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:15:32.204912  295389 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:15:32.205922  295389 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:15:32.205956  295389 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:15:32.205963  295389 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:15:32.206050  295389 cache.go:58] Caching tarball of preloaded images
	I1018 09:15:32.206154  295389 preload.go:233] Found /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:15:32.206169  295389 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:15:32.206266  295389 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/config.json ...
	I1018 09:15:32.206285  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/config.json: {Name:mk23740dd9b3ceb9853336235bb2b2f334ebec71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:32.230777  295389 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:15:32.230801  295389 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:15:32.230821  295389 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:15:32.230852  295389 start.go:360] acquireMachinesLock for embed-certs-880603: {Name:mkdfbdbf4ee52d14237c1c3c1038142062936208 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:15:32.230972  295389 start.go:364] duration metric: took 100.257µs to acquireMachinesLock for "embed-certs-880603"
	I1018 09:15:32.231005  295389 start.go:93] Provisioning new machine with config: &{Name:embed-certs-880603 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:15:32.231091  295389 start.go:125] createHost starting for "" (driver="docker")
	W1018 09:15:28.845071  278283 node_ready.go:57] node "old-k8s-version-951975" has "Ready":"False" status (will retry)
	W1018 09:15:31.344202  278283 node_ready.go:57] node "old-k8s-version-951975" has "Ready":"False" status (will retry)
	I1018 09:15:31.845510  278283 node_ready.go:49] node "old-k8s-version-951975" is "Ready"
	I1018 09:15:31.845542  278283 node_ready.go:38] duration metric: took 14.004731455s for node "old-k8s-version-951975" to be "Ready" ...
	I1018 09:15:31.845559  278283 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:15:31.845616  278283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:15:31.861586  278283 api_server.go:72] duration metric: took 14.578860164s to wait for apiserver process to appear ...
	I1018 09:15:31.861614  278283 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:15:31.861638  278283 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:15:31.867174  278283 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 09:15:31.868791  278283 api_server.go:141] control plane version: v1.28.0
	I1018 09:15:31.868821  278283 api_server.go:131] duration metric: took 7.19826ms to wait for apiserver health ...
	I1018 09:15:31.868831  278283 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:15:31.874120  278283 system_pods.go:59] 8 kube-system pods found
	I1018 09:15:31.874167  278283 system_pods.go:61] "coredns-5dd5756b68-gwttp" [349d3695-c749-4802-a9eb-53de5ac78c69] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:15:31.874178  278283 system_pods.go:61] "etcd-old-k8s-version-951975" [557b1dae-5b7e-411f-bd2d-47ed28a669e8] Running
	I1018 09:15:31.874187  278283 system_pods.go:61] "kindnet-k2756" [a85dd000-dd96-42a9-bca1-92345ab498da] Running
	I1018 09:15:31.874194  278283 system_pods.go:61] "kube-apiserver-old-k8s-version-951975" [6c6bba36-eef9-430d-9910-6872feda0163] Running
	I1018 09:15:31.874201  278283 system_pods.go:61] "kube-controller-manager-old-k8s-version-951975" [68e333df-a9a4-4bdb-85c5-765aa1551f0d] Running
	I1018 09:15:31.874206  278283 system_pods.go:61] "kube-proxy-rrzqp" [1dbe03c6-5db9-49c5-9016-c421b2d7c581] Running
	I1018 09:15:31.874212  278283 system_pods.go:61] "kube-scheduler-old-k8s-version-951975" [cf36a929-52ed-4029-86b9-610775599e13] Running
	I1018 09:15:31.874221  278283 system_pods.go:61] "storage-provisioner" [201e8ed1-c6b6-4ac6-ada3-291e6b900df8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:15:31.874228  278283 system_pods.go:74] duration metric: took 5.39076ms to wait for pod list to return data ...
	I1018 09:15:31.874245  278283 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:15:31.877787  278283 default_sa.go:45] found service account: "default"
	I1018 09:15:31.877814  278283 default_sa.go:55] duration metric: took 3.562051ms for default service account to be created ...
	I1018 09:15:31.877826  278283 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:15:31.885357  278283 system_pods.go:86] 8 kube-system pods found
	I1018 09:15:31.885399  278283 system_pods.go:89] "coredns-5dd5756b68-gwttp" [349d3695-c749-4802-a9eb-53de5ac78c69] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:15:31.885409  278283 system_pods.go:89] "etcd-old-k8s-version-951975" [557b1dae-5b7e-411f-bd2d-47ed28a669e8] Running
	I1018 09:15:31.885420  278283 system_pods.go:89] "kindnet-k2756" [a85dd000-dd96-42a9-bca1-92345ab498da] Running
	I1018 09:15:31.885426  278283 system_pods.go:89] "kube-apiserver-old-k8s-version-951975" [6c6bba36-eef9-430d-9910-6872feda0163] Running
	I1018 09:15:31.885435  278283 system_pods.go:89] "kube-controller-manager-old-k8s-version-951975" [68e333df-a9a4-4bdb-85c5-765aa1551f0d] Running
	I1018 09:15:31.885441  278283 system_pods.go:89] "kube-proxy-rrzqp" [1dbe03c6-5db9-49c5-9016-c421b2d7c581] Running
	I1018 09:15:31.885446  278283 system_pods.go:89] "kube-scheduler-old-k8s-version-951975" [cf36a929-52ed-4029-86b9-610775599e13] Running
	I1018 09:15:31.885461  278283 system_pods.go:89] "storage-provisioner" [201e8ed1-c6b6-4ac6-ada3-291e6b900df8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:15:31.885491  278283 retry.go:31] will retry after 283.843392ms: missing components: kube-dns
	I1018 09:15:32.175401  278283 system_pods.go:86] 8 kube-system pods found
	I1018 09:15:32.175443  278283 system_pods.go:89] "coredns-5dd5756b68-gwttp" [349d3695-c749-4802-a9eb-53de5ac78c69] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:15:32.175451  278283 system_pods.go:89] "etcd-old-k8s-version-951975" [557b1dae-5b7e-411f-bd2d-47ed28a669e8] Running
	I1018 09:15:32.175461  278283 system_pods.go:89] "kindnet-k2756" [a85dd000-dd96-42a9-bca1-92345ab498da] Running
	I1018 09:15:32.175469  278283 system_pods.go:89] "kube-apiserver-old-k8s-version-951975" [6c6bba36-eef9-430d-9910-6872feda0163] Running
	I1018 09:15:32.175476  278283 system_pods.go:89] "kube-controller-manager-old-k8s-version-951975" [68e333df-a9a4-4bdb-85c5-765aa1551f0d] Running
	I1018 09:15:32.175481  278283 system_pods.go:89] "kube-proxy-rrzqp" [1dbe03c6-5db9-49c5-9016-c421b2d7c581] Running
	I1018 09:15:32.175487  278283 system_pods.go:89] "kube-scheduler-old-k8s-version-951975" [cf36a929-52ed-4029-86b9-610775599e13] Running
	I1018 09:15:32.175501  278283 system_pods.go:89] "storage-provisioner" [201e8ed1-c6b6-4ac6-ada3-291e6b900df8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:15:32.175518  278283 retry.go:31] will retry after 293.269633ms: missing components: kube-dns
	I1018 09:15:32.474124  278283 system_pods.go:86] 8 kube-system pods found
	I1018 09:15:32.474167  278283 system_pods.go:89] "coredns-5dd5756b68-gwttp" [349d3695-c749-4802-a9eb-53de5ac78c69] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:15:32.474178  278283 system_pods.go:89] "etcd-old-k8s-version-951975" [557b1dae-5b7e-411f-bd2d-47ed28a669e8] Running
	I1018 09:15:32.474188  278283 system_pods.go:89] "kindnet-k2756" [a85dd000-dd96-42a9-bca1-92345ab498da] Running
	I1018 09:15:32.474194  278283 system_pods.go:89] "kube-apiserver-old-k8s-version-951975" [6c6bba36-eef9-430d-9910-6872feda0163] Running
	I1018 09:15:32.474201  278283 system_pods.go:89] "kube-controller-manager-old-k8s-version-951975" [68e333df-a9a4-4bdb-85c5-765aa1551f0d] Running
	I1018 09:15:32.474206  278283 system_pods.go:89] "kube-proxy-rrzqp" [1dbe03c6-5db9-49c5-9016-c421b2d7c581] Running
	I1018 09:15:32.474213  278283 system_pods.go:89] "kube-scheduler-old-k8s-version-951975" [cf36a929-52ed-4029-86b9-610775599e13] Running
	I1018 09:15:32.474228  278283 system_pods.go:89] "storage-provisioner" [201e8ed1-c6b6-4ac6-ada3-291e6b900df8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:15:32.474254  278283 retry.go:31] will retry after 328.832401ms: missing components: kube-dns
	I1018 09:15:32.808628  278283 system_pods.go:86] 8 kube-system pods found
	I1018 09:15:32.808674  278283 system_pods.go:89] "coredns-5dd5756b68-gwttp" [349d3695-c749-4802-a9eb-53de5ac78c69] Running
	I1018 09:15:32.808684  278283 system_pods.go:89] "etcd-old-k8s-version-951975" [557b1dae-5b7e-411f-bd2d-47ed28a669e8] Running
	I1018 09:15:32.808689  278283 system_pods.go:89] "kindnet-k2756" [a85dd000-dd96-42a9-bca1-92345ab498da] Running
	I1018 09:15:32.808695  278283 system_pods.go:89] "kube-apiserver-old-k8s-version-951975" [6c6bba36-eef9-430d-9910-6872feda0163] Running
	I1018 09:15:32.808711  278283 system_pods.go:89] "kube-controller-manager-old-k8s-version-951975" [68e333df-a9a4-4bdb-85c5-765aa1551f0d] Running
	I1018 09:15:32.808718  278283 system_pods.go:89] "kube-proxy-rrzqp" [1dbe03c6-5db9-49c5-9016-c421b2d7c581] Running
	I1018 09:15:32.808727  278283 system_pods.go:89] "kube-scheduler-old-k8s-version-951975" [cf36a929-52ed-4029-86b9-610775599e13] Running
	I1018 09:15:32.808733  278283 system_pods.go:89] "storage-provisioner" [201e8ed1-c6b6-4ac6-ada3-291e6b900df8] Running
	I1018 09:15:32.808745  278283 system_pods.go:126] duration metric: took 930.909907ms to wait for k8s-apps to be running ...
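The retry loop above polls the kube-system pod list until CoreDNS (the "kube-dns" component) leaves Pending; the 283ms/293ms/328ms backoffs are visible in the retry lines. A minimal manual equivalent, a sketch assuming only the k8s-app=kube-dns label that the later pod_ready waits in this log also use:

    # sketch: list the same pods the retry loop is gating on, with their phase
    kubectl -n kube-system get pods -l k8s-app=kube-dns \
      -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'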
	I1018 09:15:32.808760  278283 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:15:32.808810  278283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:15:32.824848  278283 system_svc.go:56] duration metric: took 16.07781ms WaitForService to wait for kubelet
	I1018 09:15:32.824880  278283 kubeadm.go:586] duration metric: took 15.542161861s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:15:32.824902  278283 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:15:32.828117  278283 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:15:32.828143  278283 node_conditions.go:123] node cpu capacity is 8
	I1018 09:15:32.828159  278283 node_conditions.go:105] duration metric: took 3.252974ms to run NodePressure ...
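The NodePressure check above reads node capacity (304681132Ki ephemeral storage, 8 CPUs) from the API. A hedged one-liner that surfaces the same capacity fields; the jsonpath layout is an illustrative choice, not minikube's code path:

    # sketch: print per-node cpu and ephemeral-storage capacity
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.capacity.cpu}{" cpu, "}{.status.capacity.ephemeral-storage}{"\n"}{end}'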
	I1018 09:15:32.828172  278283 start.go:241] waiting for startup goroutines ...
	I1018 09:15:32.828181  278283 start.go:246] waiting for cluster config update ...
	I1018 09:15:32.828199  278283 start.go:255] writing updated cluster config ...
	I1018 09:15:32.828503  278283 ssh_runner.go:195] Run: rm -f paused
	I1018 09:15:32.833552  278283 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:15:32.838471  278283 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gwttp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:32.844815  278283 pod_ready.go:94] pod "coredns-5dd5756b68-gwttp" is "Ready"
	I1018 09:15:32.844841  278283 pod_ready.go:86] duration metric: took 6.340697ms for pod "coredns-5dd5756b68-gwttp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:32.848479  278283 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:32.854136  278283 pod_ready.go:94] pod "etcd-old-k8s-version-951975" is "Ready"
	I1018 09:15:32.854166  278283 pod_ready.go:86] duration metric: took 5.64737ms for pod "etcd-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:32.860230  278283 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:32.865472  278283 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-951975" is "Ready"
	I1018 09:15:32.865504  278283 pod_ready.go:86] duration metric: took 5.249944ms for pod "kube-apiserver-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:32.868862  278283 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:31.691516  285675 out.go:252]   - Configuring RBAC rules ...
	I1018 09:15:31.691688  285675 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:15:31.698230  285675 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:15:31.704460  285675 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:15:31.707490  285675 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:15:31.710509  285675 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:15:31.720041  285675 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:15:32.044031  285675 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:15:32.469144  285675 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:15:33.042364  285675 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:15:33.043616  285675 kubeadm.go:318] 
	I1018 09:15:33.043732  285675 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:15:33.043750  285675 kubeadm.go:318] 
	I1018 09:15:33.043884  285675 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:15:33.043902  285675 kubeadm.go:318] 
	I1018 09:15:33.043937  285675 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:15:33.044031  285675 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:15:33.044113  285675 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:15:33.044128  285675 kubeadm.go:318] 
	I1018 09:15:33.044203  285675 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:15:33.044212  285675 kubeadm.go:318] 
	I1018 09:15:33.044279  285675 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:15:33.044287  285675 kubeadm.go:318] 
	I1018 09:15:33.044391  285675 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:15:33.044474  285675 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:15:33.044573  285675 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:15:33.044590  285675 kubeadm.go:318] 
	I1018 09:15:33.044726  285675 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:15:33.044842  285675 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:15:33.044858  285675 kubeadm.go:318] 
	I1018 09:15:33.044996  285675 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token dp9bve.11qtsrmx8h95i336 \
	I1018 09:15:33.045154  285675 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f \
	I1018 09:15:33.045213  285675 kubeadm.go:318] 	--control-plane 
	I1018 09:15:33.045224  285675 kubeadm.go:318] 
	I1018 09:15:33.045381  285675 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:15:33.045398  285675 kubeadm.go:318] 
	I1018 09:15:33.045516  285675 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token dp9bve.11qtsrmx8h95i336 \
	I1018 09:15:33.045676  285675 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f 
	I1018 09:15:33.048195  285675 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:15:33.048334  285675 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
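The sha256 value in the join commands above is the fingerprint of the cluster CA's public key. The standard openssl recipe recomputes it on the control plane, assuming the default kubeadm CA path:

    # recompute --discovery-token-ca-cert-hash (documented kubeadm recipe)
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'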
	I1018 09:15:33.048384  285675 cni.go:84] Creating CNI manager for ""
	I1018 09:15:33.048394  285675 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:15:33.050368  285675 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 09:15:33.051650  285675 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:15:33.057270  285675 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:15:33.057292  285675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:15:33.074946  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
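Once the CNI manifest is applied, kindnet should come up as a DaemonSet in kube-system. A sketch of a readiness check, assuming the DaemonSet is named kindnet with label app=kindnet (the manifest itself is not shown in this log):

    # assumption: DaemonSet name/label as in the usual kindnet manifests
    kubectl -n kube-system rollout status daemonset/kindnet --timeout=2m
    kubectl -n kube-system get pods -l app=kindnet -o wide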
	I1018 09:15:33.337199  285675 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:15:33.337280  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:33.337288  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-031066 minikube.k8s.io/updated_at=2025_10_18T09_15_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=no-preload-031066 minikube.k8s.io/primary=true
	I1018 09:15:33.434790  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:33.438470  285675 ops.go:34] apiserver oom_adj: -16
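The oom_adj of -16 read above makes the kernel OOM killer strongly prefer other processes over the apiserver. Modern kernels express the same policy through oom_score_adj (range -1000..1000); a quick comparison:

    # read the modern equivalent of the legacy /proc/<pid>/oom_adj knob
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj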
	I1018 09:15:33.238462  278283 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-951975" is "Ready"
	I1018 09:15:33.238489  278283 pod_ready.go:86] duration metric: took 369.604169ms for pod "kube-controller-manager-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:33.439364  278283 pod_ready.go:83] waiting for pod "kube-proxy-rrzqp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:33.837915  278283 pod_ready.go:94] pod "kube-proxy-rrzqp" is "Ready"
	I1018 09:15:33.837945  278283 pod_ready.go:86] duration metric: took 398.554496ms for pod "kube-proxy-rrzqp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:34.039441  278283 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:34.438779  278283 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-951975" is "Ready"
	I1018 09:15:34.438803  278283 pod_ready.go:86] duration metric: took 399.332707ms for pod "kube-scheduler-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:34.438814  278283 pod_ready.go:40] duration metric: took 1.605221382s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
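The per-pod "Ready" waits above can be approximated with kubectl wait, reusing the label selectors the log lists; a sketch only, not minikube's actual mechanism:

    # approximate two of the pod_ready waits with kubectl's built-in condition wait
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
    kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-scheduler --timeout=4m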
	I1018 09:15:34.490030  278283 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1018 09:15:34.493297  278283 out.go:203] 
	W1018 09:15:34.494858  278283 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 09:15:34.496973  278283 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 09:15:34.499306  278283 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-951975" cluster and "default" namespace by default
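The skew warning above flags a client six minor versions ahead of the 1.28.0 server. Following minikube's own hint, a version-matched kubectl can be invoked per profile:

    # use minikube's bundled, version-matched kubectl for this profile
    minikube -p old-k8s-version-951975 kubectl -- version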
	W1018 09:15:31.354275  275240 pod_ready.go:104] pod "coredns-66bc5c9577-mvszb" is not "Ready", error: <nil>
	W1018 09:15:33.357639  275240 pod_ready.go:104] pod "coredns-66bc5c9577-mvszb" is not "Ready", error: <nil>
	I1018 09:15:32.233405  295389 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:15:32.233743  295389 start.go:159] libmachine.API.Create for "embed-certs-880603" (driver="docker")
	I1018 09:15:32.233784  295389 client.go:168] LocalClient.Create starting
	I1018 09:15:32.233874  295389 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem
	I1018 09:15:32.233921  295389 main.go:141] libmachine: Decoding PEM data...
	I1018 09:15:32.233942  295389 main.go:141] libmachine: Parsing certificate...
	I1018 09:15:32.234017  295389 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem
	I1018 09:15:32.234053  295389 main.go:141] libmachine: Decoding PEM data...
	I1018 09:15:32.234078  295389 main.go:141] libmachine: Parsing certificate...
	I1018 09:15:32.234555  295389 cli_runner.go:164] Run: docker network inspect embed-certs-880603 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:15:32.255972  295389 cli_runner.go:211] docker network inspect embed-certs-880603 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:15:32.256059  295389 network_create.go:284] running [docker network inspect embed-certs-880603] to gather additional debugging logs...
	I1018 09:15:32.256079  295389 cli_runner.go:164] Run: docker network inspect embed-certs-880603
	W1018 09:15:32.277611  295389 cli_runner.go:211] docker network inspect embed-certs-880603 returned with exit code 1
	I1018 09:15:32.277639  295389 network_create.go:287] error running [docker network inspect embed-certs-880603]: docker network inspect embed-certs-880603: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-880603 not found
	I1018 09:15:32.277651  295389 network_create.go:289] output of [docker network inspect embed-certs-880603]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-880603 not found
	
	** /stderr **
	I1018 09:15:32.277778  295389 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:15:32.301309  295389 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0a5d0734e8e5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:09:81:3f:ef:cf} reservation:<nil>}
	I1018 09:15:32.302278  295389 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0be1ffd412fe IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:00:46:36:7b:65} reservation:<nil>}
	I1018 09:15:32.303325  295389 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e93e49dbe6fd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:52:68:21:3c:ba:1e} reservation:<nil>}
	I1018 09:15:32.304628  295389 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec7ab0}
	I1018 09:15:32.304708  295389 network_create.go:124] attempt to create docker network embed-certs-880603 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 09:15:32.304779  295389 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-880603 embed-certs-880603
	I1018 09:15:32.381666  295389 network_create.go:108] docker network embed-certs-880603 192.168.76.0/24 created
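With 192.168.49.0/24, .58.0/24 and .67.0/24 already taken, minikube lands on 192.168.76.0/24. A sketch that confirms the created network's subnet and gateway with the docker CLI; the --format template is an illustrative choice of what to print:

    # inspect the bridge network created above
    docker network inspect embed-certs-880603 \
      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'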
	I1018 09:15:32.381699  295389 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-880603" container
	I1018 09:15:32.381798  295389 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:15:32.401640  295389 cli_runner.go:164] Run: docker volume create embed-certs-880603 --label name.minikube.sigs.k8s.io=embed-certs-880603 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:15:32.422540  295389 oci.go:103] Successfully created a docker volume embed-certs-880603
	I1018 09:15:32.422618  295389 cli_runner.go:164] Run: docker run --rm --name embed-certs-880603-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-880603 --entrypoint /usr/bin/test -v embed-certs-880603:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:15:32.865542  295389 oci.go:107] Successfully prepared a docker volume embed-certs-880603
	I1018 09:15:32.865607  295389 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:15:32.865629  295389 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:15:32.865726  295389 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-880603:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
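The docker run above seeds the named volume by starting a throwaway container whose entrypoint is tar: the preload tarball is bind-mounted read-only and extracted into the mounted volume. The general shape of the trick, with illustrative paths and volume names rather than minikube's exact invocation:

    # sketch: populate a named volume from a host tarball via a disposable container
    img=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757
    docker volume create demo-vol
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PWD/preload.tar.lz4:/preloaded.tar:ro" -v demo-vol:/extractDir \
      "$img" -I lz4 -xf /preloaded.tar -C /extractDir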
	I1018 09:15:33.935277  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:34.435510  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:34.935025  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:35.435706  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:35.935579  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:36.435528  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:36.934884  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:37.434897  285675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:37.542243  285675 kubeadm.go:1113] duration metric: took 4.205033258s to wait for elevateKubeSystemPrivileges
	I1018 09:15:37.542276  285675 kubeadm.go:402] duration metric: took 15.653050858s to StartCluster
	I1018 09:15:37.542299  285675 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:37.542400  285675 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:15:37.544117  285675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:37.544962  285675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:15:37.544984  285675 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:15:37.545265  285675 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:15:37.545317  285675 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:15:37.545424  285675 addons.go:69] Setting storage-provisioner=true in profile "no-preload-031066"
	I1018 09:15:37.545447  285675 addons.go:238] Setting addon storage-provisioner=true in "no-preload-031066"
	I1018 09:15:37.545479  285675 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:15:37.545492  285675 addons.go:69] Setting default-storageclass=true in profile "no-preload-031066"
	I1018 09:15:37.545508  285675 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-031066"
	I1018 09:15:37.545834  285675 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:15:37.545995  285675 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:15:37.547505  285675 out.go:179] * Verifying Kubernetes components...
	I1018 09:15:37.549112  285675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:15:37.581126  285675 addons.go:238] Setting addon default-storageclass=true in "no-preload-031066"
	I1018 09:15:37.581177  285675 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:15:37.581763  285675 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:15:37.585895  285675 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:15:37.587609  285675 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:15:37.587632  285675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:15:37.587694  285675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:15:37.633093  285675 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:15:37.633119  285675 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:15:37.633379  285675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:15:37.634927  285675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:15:37.668040  285675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:15:37.680762  285675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:15:37.741473  285675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:15:37.769011  285675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:15:37.794990  285675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:15:37.923955  285675 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 09:15:37.925528  285675 node_ready.go:35] waiting up to 6m0s for node "no-preload-031066" to be "Ready" ...
	I1018 09:15:38.173982  285675 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:15:38.175420  285675 addons.go:514] duration metric: took 630.09547ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:15:38.429469  285675 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-031066" context rescaled to 1 replicas
	W1018 09:15:35.854944  275240 pod_ready.go:104] pod "coredns-66bc5c9577-mvszb" is not "Ready", error: <nil>
	W1018 09:15:37.857842  275240 pod_ready.go:104] pod "coredns-66bc5c9577-mvszb" is not "Ready", error: <nil>
	W1018 09:15:40.354699  275240 pod_ready.go:104] pod "coredns-66bc5c9577-mvszb" is not "Ready", error: <nil>
	I1018 09:15:37.570463  295389 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-880603:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.704676283s)
	I1018 09:15:37.570508  295389 kic.go:203] duration metric: took 4.704875702s to extract preloaded images to volume ...
	W1018 09:15:37.570614  295389 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:15:37.570653  295389 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:15:37.570698  295389 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:15:37.691551  295389 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-880603 --name embed-certs-880603 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-880603 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-880603 --network embed-certs-880603 --ip 192.168.76.2 --volume embed-certs-880603:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:15:38.077977  295389 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Running}}
	I1018 09:15:38.102014  295389 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:15:38.125671  295389 cli_runner.go:164] Run: docker exec embed-certs-880603 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:15:38.177896  295389 oci.go:144] the created container "embed-certs-880603" has a running status.
	I1018 09:15:38.177930  295389 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa...
	I1018 09:15:38.642256  295389 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:15:38.676713  295389 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:15:38.702947  295389 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:15:38.702983  295389 kic_runner.go:114] Args: [docker exec --privileged embed-certs-880603 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:15:38.766592  295389 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:15:38.792120  295389 machine.go:93] provisionDockerMachine start ...
	I1018 09:15:38.792220  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:38.821161  295389 main.go:141] libmachine: Using SSH client type: native
	I1018 09:15:38.821544  295389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1018 09:15:38.821563  295389 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:15:38.978655  295389 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880603
	
	I1018 09:15:38.978756  295389 ubuntu.go:182] provisioning hostname "embed-certs-880603"
	I1018 09:15:38.978850  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:39.004942  295389 main.go:141] libmachine: Using SSH client type: native
	I1018 09:15:39.005320  295389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1018 09:15:39.005388  295389 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880603 && echo "embed-certs-880603" | sudo tee /etc/hostname
	I1018 09:15:39.176612  295389 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880603
	
	I1018 09:15:39.176723  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:39.200427  295389 main.go:141] libmachine: Using SSH client type: native
	I1018 09:15:39.200706  295389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1018 09:15:39.200737  295389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880603' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880603/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880603' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:15:39.351904  295389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:15:39.351937  295389 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:15:39.351981  295389 ubuntu.go:190] setting up certificates
	I1018 09:15:39.352002  295389 provision.go:84] configureAuth start
	I1018 09:15:39.352069  295389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-880603
	I1018 09:15:39.378627  295389 provision.go:143] copyHostCerts
	I1018 09:15:39.378707  295389 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:15:39.378719  295389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:15:39.378790  295389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:15:39.378957  295389 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:15:39.378973  295389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:15:39.379016  295389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:15:39.379114  295389 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:15:39.379126  295389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:15:39.379166  295389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:15:39.379264  295389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880603 san=[127.0.0.1 192.168.76.2 embed-certs-880603 localhost minikube]
	I1018 09:15:39.569781  295389 provision.go:177] copyRemoteCerts
	I1018 09:15:39.569855  295389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:15:39.569903  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:39.594280  295389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:15:39.695654  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:15:39.722744  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 09:15:39.743245  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
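The server cert generated above carries the SANs listed in the san=[...] line (127.0.0.1, 192.168.76.2, embed-certs-880603, localhost, minikube). A hedged openssl check against the copied file; -ext needs OpenSSL 1.1.1 or newer:

    # print the Subject Alternative Name extension of the provisioned server cert
    openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName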
	I1018 09:15:39.764479  295389 provision.go:87] duration metric: took 412.459003ms to configureAuth
	I1018 09:15:39.764511  295389 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:15:39.764708  295389 config.go:182] Loaded profile config "embed-certs-880603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:15:39.764854  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:39.784307  295389 main.go:141] libmachine: Using SSH client type: native
	I1018 09:15:39.784545  295389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1018 09:15:39.784562  295389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:15:40.039180  295389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:15:40.039209  295389 machine.go:96] duration metric: took 1.247061752s to provisionDockerMachine
	I1018 09:15:40.039223  295389 client.go:171] duration metric: took 7.80542558s to LocalClient.Create
	I1018 09:15:40.039254  295389 start.go:167] duration metric: took 7.805513563s to libmachine.API.Create "embed-certs-880603"
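During provisioning, CRI-O was handed --insecure-registry 10.96.0.0/12 through /etc/sysconfig/crio.minikube (written and followed by a restart a few lines above). A sketch verifying the running daemon picked it up, assuming the crio unit sources that sysconfig file:

    # assumption: crio's unit file reads /etc/sysconfig/crio.minikube
    cat /etc/sysconfig/crio.minikube
    ps -o args= -C crio | grep -o -- '--insecure-registry[^ ]*' || true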
	I1018 09:15:40.039276  295389 start.go:293] postStartSetup for "embed-certs-880603" (driver="docker")
	I1018 09:15:40.039294  295389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:15:40.039438  295389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:15:40.039487  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:40.058297  295389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:15:40.159245  295389 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:15:40.163254  295389 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:15:40.163290  295389 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:15:40.163303  295389 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:15:40.163405  295389 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:15:40.163509  295389 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:15:40.163639  295389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:15:40.172115  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:15:40.194950  295389 start.go:296] duration metric: took 155.654711ms for postStartSetup
	I1018 09:15:40.195402  295389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-880603
	I1018 09:15:40.215443  295389 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/config.json ...
	I1018 09:15:40.215726  295389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:15:40.215769  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:40.234376  295389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:15:40.329721  295389 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:15:40.334785  295389 start.go:128] duration metric: took 8.103677206s to createHost
	I1018 09:15:40.334809  295389 start.go:83] releasing machines lock for "embed-certs-880603", held for 8.103824249s
	I1018 09:15:40.334868  295389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-880603
	I1018 09:15:40.354635  295389 ssh_runner.go:195] Run: cat /version.json
	I1018 09:15:40.354700  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:40.354725  295389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:15:40.354801  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:40.373885  295389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:15:40.375891  295389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:15:40.477621  295389 ssh_runner.go:195] Run: systemctl --version
	I1018 09:15:40.540898  295389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:15:40.579139  295389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:15:40.584278  295389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:15:40.584366  295389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:15:40.612917  295389 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 09:15:40.612941  295389 start.go:495] detecting cgroup driver to use...
	I1018 09:15:40.612974  295389 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:15:40.613027  295389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:15:40.629778  295389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:15:40.643333  295389 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:15:40.643415  295389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:15:40.661898  295389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:15:40.680868  295389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:15:40.762191  295389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:15:40.853709  295389 docker.go:234] disabling docker service ...
	I1018 09:15:40.853777  295389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:15:40.873622  295389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:15:40.887336  295389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:15:40.976284  295389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:15:41.065146  295389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:15:41.079121  295389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:15:41.094714  295389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:15:41.094764  295389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:15:41.105859  295389 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:15:41.105913  295389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:15:41.115704  295389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:15:41.125267  295389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:15:41.134829  295389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:15:41.143790  295389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:15:41.153409  295389 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:15:41.168473  295389 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:15:41.178426  295389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:15:41.186821  295389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:15:41.195001  295389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:15:41.280099  295389 ssh_runner.go:195] Run: sudo systemctl restart crio
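The chain of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf before the restart: pause image, systemd cgroup manager, conmon cgroup, and an unprivileged-port sysctl. A hedged reconstruction of the net effect, derived from the commands rather than a dump of the real file:

    # show the settings the sed chain should have left behind
    grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",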
	I1018 09:15:41.601720  295389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:15:41.601802  295389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:15:41.606255  295389 start.go:563] Will wait 60s for crictl version
	I1018 09:15:41.606320  295389 ssh_runner.go:195] Run: which crictl
	I1018 09:15:41.610390  295389 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:15:41.636513  295389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:15:41.636596  295389 ssh_runner.go:195] Run: crio --version
	I1018 09:15:41.666372  295389 ssh_runner.go:195] Run: crio --version
	I1018 09:15:41.697849  295389 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:15:41.699241  295389 cli_runner.go:164] Run: docker network inspect embed-certs-880603 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:15:41.718518  295389 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:15:41.723028  295389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:15:41.734463  295389 kubeadm.go:883] updating cluster {Name:embed-certs-880603 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:15:41.734573  295389 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:15:41.734623  295389 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:15:41.767221  295389 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:15:41.767241  295389 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:15:41.767291  295389 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:15:41.795400  295389 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:15:41.795422  295389 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:15:41.795429  295389 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:15:41.795522  295389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-880603 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
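The empty ExecStart= line in the kubelet drop-in above is the standard systemd idiom for replacing, rather than appending to, the base unit's command; the merged result can be inspected on the node with:

    # show the base unit plus all drop-ins, in the order systemd merges them
    systemctl cat kubelet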
	I1018 09:15:41.795627  295389 ssh_runner.go:195] Run: crio config
	I1018 09:15:41.843775  295389 cni.go:84] Creating CNI manager for ""
	I1018 09:15:41.843801  295389 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:15:41.843825  295389 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:15:41.843847  295389 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880603 NodeName:embed-certs-880603 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:15:41.843988  295389 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880603"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:15:41.844048  295389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:15:41.853671  295389 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:15:41.853744  295389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:15:41.862858  295389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:15:41.876963  295389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:15:41.894051  295389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:15:41.907831  295389 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:15:41.911850  295389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
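(Editor's note: the /etc/hosts rewrite at 09:15:41.911850 is the usual idempotent pattern: filter out any stale control-plane.minikube.internal line, append the current mapping, and copy a temp file back over /etc/hosts. A minimal annotated sketch of the same pattern; the IP and hostname are this run's.)

    # Drop any existing tab-separated mapping for the name, re-add the
    # current one, then overwrite /etc/hosts from a temp file.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.76.2\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts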
	I1018 09:15:41.922821  295389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1018 09:15:39.929245  285675 node_ready.go:57] node "no-preload-031066" has "Ready":"False" status (will retry)
	W1018 09:15:41.929817  285675 node_ready.go:57] node "no-preload-031066" has "Ready":"False" status (will retry)
	I1018 09:15:42.015006  295389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:15:42.040920  295389 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603 for IP: 192.168.76.2
	I1018 09:15:42.040946  295389 certs.go:195] generating shared ca certs ...
	I1018 09:15:42.040969  295389 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:42.041123  295389 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:15:42.041159  295389 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:15:42.041169  295389 certs.go:257] generating profile certs ...
	I1018 09:15:42.041229  295389 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/client.key
	I1018 09:15:42.041248  295389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/client.crt with IP's: []
	I1018 09:15:42.348714  295389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/client.crt ...
	I1018 09:15:42.348763  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/client.crt: {Name:mkfcbb26b0c0fddf2e62728597f176b171231f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:42.348998  295389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/client.key ...
	I1018 09:15:42.349021  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/client.key: {Name:mkf79e98db8fc4b219ddc41f01278546f024072c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:42.349152  295389 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.key.d64b1fe7
	I1018 09:15:42.349177  295389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.crt.d64b1fe7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 09:15:43.054283  295389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.crt.d64b1fe7 ...
	I1018 09:15:43.054310  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.crt.d64b1fe7: {Name:mk8d1335a0e1ace11ffdf1a21dc71f25fac69c93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:43.054517  295389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.key.d64b1fe7 ...
	I1018 09:15:43.054535  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.key.d64b1fe7: {Name:mkbfd66b890f8dc243c5bdc50cbf46ac1edeb490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:43.054628  295389 certs.go:382] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.crt.d64b1fe7 -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.crt
	I1018 09:15:43.054706  295389 certs.go:386] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.key.d64b1fe7 -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.key
	I1018 09:15:43.054763  295389 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.key
	I1018 09:15:43.054778  295389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.crt with IP's: []
	I1018 09:15:43.200283  295389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.crt ...
	I1018 09:15:43.200309  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.crt: {Name:mke17b12873a1f95776cc7750eb7023cc38f351c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:43.200497  295389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.key ...
	I1018 09:15:43.200510  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.key: {Name:mk4a4481ee8ad8fff82932bca75d858c18666d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:43.200691  295389 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:15:43.200726  295389 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:15:43.200736  295389 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:15:43.200755  295389 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:15:43.200776  295389 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:15:43.200797  295389 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:15:43.200833  295389 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:15:43.201338  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:15:43.221136  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:15:43.240631  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:15:43.259605  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:15:43.278725  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 09:15:43.297623  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:15:43.317184  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:15:43.336726  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:15:43.357452  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:15:43.377895  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:15:43.397640  295389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:15:43.417982  295389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:15:43.432495  295389 ssh_runner.go:195] Run: openssl version
	I1018 09:15:43.439052  295389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:15:43.448566  295389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:15:43.453009  295389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:15:43.453082  295389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:15:43.489084  295389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:15:43.498797  295389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:15:43.509163  295389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:15:43.514217  295389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:15:43.514284  295389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:15:43.554285  295389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:15:43.563761  295389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:15:43.573614  295389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:15:43.577928  295389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:15:43.577995  295389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:15:43.614369  295389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
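(Editor's note: the openssl/ln pairs above implement OpenSSL's hashed-directory CA lookup: each trusted cert in /etc/ssl/certs is reachable through a symlink named after its subject-name hash plus a collision counter, which is exactly what `openssl x509 -hash` prints, b5213941 for minikubeCA here. A sketch using this run's minikubeCA path.)

    # Create the subject-hash symlink OpenSSL uses to locate a trusted CA.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0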
	I1018 09:15:43.624275  295389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:15:43.628386  295389 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:15:43.628453  295389 kubeadm.go:400] StartCluster: {Name:embed-certs-880603 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:15:43.628527  295389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:15:43.628592  295389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:15:43.657012  295389 cri.go:89] found id: ""
	I1018 09:15:43.657088  295389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:15:43.666493  295389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:15:43.675282  295389 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:15:43.675366  295389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:15:43.684084  295389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:15:43.684105  295389 kubeadm.go:157] found existing configuration files:
	
	I1018 09:15:43.684170  295389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:15:43.692972  295389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:15:43.693027  295389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:15:43.701505  295389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:15:43.710206  295389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:15:43.710272  295389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:15:43.718731  295389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:15:43.727759  295389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:15:43.727820  295389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:15:43.736019  295389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:15:43.744393  295389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:15:43.744463  295389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:15:43.752571  295389 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
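(Editor's note: the long --ignore-preflight-errors list in the init invocation above matches the earlier note that SystemVerification is skipped under the docker driver. To see which checks would fire without running a full init, kubeadm exposes preflight as a standalone phase; a sketch, with the config path taken from this run.)

    # Run only the preflight checks against the staged config.
    sudo kubeadm init phase preflight \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification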
	I1018 09:15:43.793524  295389 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:15:43.793597  295389 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:15:43.817251  295389 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:15:43.817432  295389 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:15:43.817490  295389 kubeadm.go:318] OS: Linux
	I1018 09:15:43.817552  295389 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:15:43.817647  295389 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:15:43.817730  295389 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:15:43.817798  295389 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:15:43.817872  295389 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:15:43.817959  295389 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:15:43.818026  295389 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:15:43.818093  295389 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:15:43.891827  295389 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:15:43.891982  295389 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:15:43.892123  295389 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
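(Editor's note: as the preflight hint above says, the control-plane images can be enumerated or pre-pulled ahead of init with standard kubeadm subcommands; version pinned to this run's.)

    kubeadm config images list --kubernetes-version v1.34.1
    kubeadm config images pull --kubernetes-version v1.34.1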
	I1018 09:15:43.902044  295389 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1018 09:15:42.355329  275240 pod_ready.go:104] pod "coredns-66bc5c9577-mvszb" is not "Ready", error: <nil>
	I1018 09:15:43.854876  275240 pod_ready.go:94] pod "coredns-66bc5c9577-mvszb" is "Ready"
	I1018 09:15:43.854908  275240 pod_ready.go:86] duration metric: took 37.506231953s for pod "coredns-66bc5c9577-mvszb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:43.854921  275240 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sv2l8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:43.857076  275240 pod_ready.go:99] pod "coredns-66bc5c9577-sv2l8" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-sv2l8" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-sv2l8" not found
	I1018 09:15:43.857102  275240 pod_ready.go:86] duration metric: took 2.173408ms for pod "coredns-66bc5c9577-sv2l8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:43.860097  275240 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:43.864943  275240 pod_ready.go:94] pod "etcd-enable-default-cni-448954" is "Ready"
	I1018 09:15:43.864969  275240 pod_ready.go:86] duration metric: took 4.845664ms for pod "etcd-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:43.867276  275240 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:43.871790  275240 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-448954" is "Ready"
	I1018 09:15:43.871811  275240 pod_ready.go:86] duration metric: took 4.513529ms for pod "kube-apiserver-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:43.874029  275240 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:44.253454  275240 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-448954" is "Ready"
	I1018 09:15:44.253479  275240 pod_ready.go:86] duration metric: took 379.427235ms for pod "kube-controller-manager-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:44.452416  275240 pod_ready.go:83] waiting for pod "kube-proxy-6sbvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:44.853062  275240 pod_ready.go:94] pod "kube-proxy-6sbvw" is "Ready"
	I1018 09:15:44.853093  275240 pod_ready.go:86] duration metric: took 400.649312ms for pod "kube-proxy-6sbvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:45.054092  275240 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:45.452397  275240 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-448954" is "Ready"
	I1018 09:15:45.452423  275240 pod_ready.go:86] duration metric: took 398.304908ms for pod "kube-scheduler-enable-default-cni-448954" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:45.452438  275240 pod_ready.go:40] duration metric: took 39.108344409s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:15:45.511510  275240 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:15:45.515774  275240 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-448954" cluster and "default" namespace by default
	W1018 09:15:45.529767  275240 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 79889207-65e1-419a-a97a-da076106332c
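(Editor's note: the pod_ready loop above, process 275240, waits per label selector for each control-plane component to report Ready or disappear. The same check can be approximated with kubectl's built-in wait, shown here for the kube-dns label used in this run, with the timeout matching the driver's 4m0s extra-wait budget.)

    kubectl -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=4m0s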
	I1018 09:15:43.904526  295389 out.go:252]   - Generating certificates and keys ...
	I1018 09:15:43.904634  295389 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:15:43.904741  295389 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:15:44.262942  295389 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:15:44.625955  295389 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:15:44.832232  295389 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:15:44.861152  295389 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:15:45.076170  295389 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:15:45.076367  295389 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-880603 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:15:45.374578  295389 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:15:45.374807  295389 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-880603 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:15:45.743708  295389 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:15:46.030227  295389 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:15:46.326662  295389 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:15:46.326780  295389 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:15:47.026896  295389 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:15:47.403069  295389 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:15:47.603762  295389 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:15:48.294843  295389 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:15:48.783959  295389 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:15:48.784800  295389 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:15:48.789197  295389 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1018 09:15:44.428439  285675 node_ready.go:57] node "no-preload-031066" has "Ready":"False" status (will retry)
	W1018 09:15:46.429314  285675 node_ready.go:57] node "no-preload-031066" has "Ready":"False" status (will retry)
	I1018 09:15:48.791768  295389 out.go:252]   - Booting up control plane ...
	I1018 09:15:48.791860  295389 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:15:48.791953  295389 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:15:48.792287  295389 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:15:48.826058  295389 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:15:48.826330  295389 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:15:48.834696  295389 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:15:48.834935  295389 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:15:48.835073  295389 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:15:48.942386  295389 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:15:48.942559  295389 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:15:49.444490  295389 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.110023ms
	I1018 09:15:49.448733  295389 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:15:49.448866  295389 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 09:15:49.449014  295389 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:15:49.449135  295389 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:15:51.862452  295389 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.41362725s
	W1018 09:15:48.928738  285675 node_ready.go:57] node "no-preload-031066" has "Ready":"False" status (will retry)
	I1018 09:15:50.932250  285675 node_ready.go:49] node "no-preload-031066" is "Ready"
	I1018 09:15:50.932277  285675 node_ready.go:38] duration metric: took 13.006716002s for node "no-preload-031066" to be "Ready" ...
	I1018 09:15:50.932292  285675 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:15:50.932336  285675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:15:50.950195  285675 api_server.go:72] duration metric: took 13.405168985s to wait for apiserver process to appear ...
	I1018 09:15:50.950225  285675 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:15:50.950244  285675 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:15:50.955251  285675 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
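(Editor's note: the healthz probe at 09:15:50.950244 is a plain HTTPS GET; by default /healthz, /livez and /readyz are readable anonymously, so it can be reproduced by hand. The endpoint is this run's; -k skips TLS verification, and a 403 here would indicate anonymous auth is disabled.)

    curl -k https://192.168.85.2:8443/healthz
    # expected body: ok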
	I1018 09:15:50.956596  285675 api_server.go:141] control plane version: v1.34.1
	I1018 09:15:50.956627  285675 api_server.go:131] duration metric: took 6.394016ms to wait for apiserver health ...
	I1018 09:15:50.956638  285675 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:15:50.961211  285675 system_pods.go:59] 8 kube-system pods found
	I1018 09:15:50.961253  285675 system_pods.go:61] "coredns-66bc5c9577-h44wj" [0f9ac8bf-4d8f-489f-a5bb-f8ef2d832a89] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:15:50.961260  285675 system_pods.go:61] "etcd-no-preload-031066" [46ee9eac-4087-442e-855b-50a8b65b06df] Running
	I1018 09:15:50.961268  285675 system_pods.go:61] "kindnet-k7m9t" [08c34b72-06a7-4a73-b703-ce61dbf3a37f] Running
	I1018 09:15:50.961273  285675 system_pods.go:61] "kube-apiserver-no-preload-031066" [7b20717e-d3b8-4f72-9c92-04c74b236964] Running
	I1018 09:15:50.961278  285675 system_pods.go:61] "kube-controller-manager-no-preload-031066" [e8145322-b25f-40ec-aa8f-39b64900226c] Running
	I1018 09:15:50.961282  285675 system_pods.go:61] "kube-proxy-jr5qn" [1ae92f3f-9c07-4fb0-8334-549bfd4cac76] Running
	I1018 09:15:50.961291  285675 system_pods.go:61] "kube-scheduler-no-preload-031066" [2d6fcc42-b0a0-46d3-8eb1-7408eadd4dc6] Running
	I1018 09:15:50.961298  285675 system_pods.go:61] "storage-provisioner" [5b3e8950-c8a2-4205-b3aa-5c48157fc9d1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:15:50.961306  285675 system_pods.go:74] duration metric: took 4.661333ms to wait for pod list to return data ...
	I1018 09:15:50.961383  285675 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:15:50.964823  285675 default_sa.go:45] found service account: "default"
	I1018 09:15:50.964850  285675 default_sa.go:55] duration metric: took 3.456701ms for default service account to be created ...
	I1018 09:15:50.964862  285675 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:15:50.968647  285675 system_pods.go:86] 8 kube-system pods found
	I1018 09:15:50.968693  285675 system_pods.go:89] "coredns-66bc5c9577-h44wj" [0f9ac8bf-4d8f-489f-a5bb-f8ef2d832a89] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:15:50.968703  285675 system_pods.go:89] "etcd-no-preload-031066" [46ee9eac-4087-442e-855b-50a8b65b06df] Running
	I1018 09:15:50.968712  285675 system_pods.go:89] "kindnet-k7m9t" [08c34b72-06a7-4a73-b703-ce61dbf3a37f] Running
	I1018 09:15:50.968718  285675 system_pods.go:89] "kube-apiserver-no-preload-031066" [7b20717e-d3b8-4f72-9c92-04c74b236964] Running
	I1018 09:15:50.968724  285675 system_pods.go:89] "kube-controller-manager-no-preload-031066" [e8145322-b25f-40ec-aa8f-39b64900226c] Running
	I1018 09:15:50.968736  285675 system_pods.go:89] "kube-proxy-jr5qn" [1ae92f3f-9c07-4fb0-8334-549bfd4cac76] Running
	I1018 09:15:50.968746  285675 system_pods.go:89] "kube-scheduler-no-preload-031066" [2d6fcc42-b0a0-46d3-8eb1-7408eadd4dc6] Running
	I1018 09:15:50.968757  285675 system_pods.go:89] "storage-provisioner" [5b3e8950-c8a2-4205-b3aa-5c48157fc9d1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:15:50.968782  285675 retry.go:31] will retry after 289.043423ms: missing components: kube-dns
	I1018 09:15:51.262441  285675 system_pods.go:86] 8 kube-system pods found
	I1018 09:15:51.262485  285675 system_pods.go:89] "coredns-66bc5c9577-h44wj" [0f9ac8bf-4d8f-489f-a5bb-f8ef2d832a89] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:15:51.262495  285675 system_pods.go:89] "etcd-no-preload-031066" [46ee9eac-4087-442e-855b-50a8b65b06df] Running
	I1018 09:15:51.262504  285675 system_pods.go:89] "kindnet-k7m9t" [08c34b72-06a7-4a73-b703-ce61dbf3a37f] Running
	I1018 09:15:51.262510  285675 system_pods.go:89] "kube-apiserver-no-preload-031066" [7b20717e-d3b8-4f72-9c92-04c74b236964] Running
	I1018 09:15:51.262516  285675 system_pods.go:89] "kube-controller-manager-no-preload-031066" [e8145322-b25f-40ec-aa8f-39b64900226c] Running
	I1018 09:15:51.262521  285675 system_pods.go:89] "kube-proxy-jr5qn" [1ae92f3f-9c07-4fb0-8334-549bfd4cac76] Running
	I1018 09:15:51.262526  285675 system_pods.go:89] "kube-scheduler-no-preload-031066" [2d6fcc42-b0a0-46d3-8eb1-7408eadd4dc6] Running
	I1018 09:15:51.262534  285675 system_pods.go:89] "storage-provisioner" [5b3e8950-c8a2-4205-b3aa-5c48157fc9d1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:15:51.262553  285675 retry.go:31] will retry after 353.253527ms: missing components: kube-dns
	I1018 09:15:51.621042  285675 system_pods.go:86] 8 kube-system pods found
	I1018 09:15:51.621077  285675 system_pods.go:89] "coredns-66bc5c9577-h44wj" [0f9ac8bf-4d8f-489f-a5bb-f8ef2d832a89] Running
	I1018 09:15:51.621085  285675 system_pods.go:89] "etcd-no-preload-031066" [46ee9eac-4087-442e-855b-50a8b65b06df] Running
	I1018 09:15:51.621090  285675 system_pods.go:89] "kindnet-k7m9t" [08c34b72-06a7-4a73-b703-ce61dbf3a37f] Running
	I1018 09:15:51.621096  285675 system_pods.go:89] "kube-apiserver-no-preload-031066" [7b20717e-d3b8-4f72-9c92-04c74b236964] Running
	I1018 09:15:51.621102  285675 system_pods.go:89] "kube-controller-manager-no-preload-031066" [e8145322-b25f-40ec-aa8f-39b64900226c] Running
	I1018 09:15:51.621107  285675 system_pods.go:89] "kube-proxy-jr5qn" [1ae92f3f-9c07-4fb0-8334-549bfd4cac76] Running
	I1018 09:15:51.621112  285675 system_pods.go:89] "kube-scheduler-no-preload-031066" [2d6fcc42-b0a0-46d3-8eb1-7408eadd4dc6] Running
	I1018 09:15:51.621116  285675 system_pods.go:89] "storage-provisioner" [5b3e8950-c8a2-4205-b3aa-5c48157fc9d1] Running
	I1018 09:15:51.621127  285675 system_pods.go:126] duration metric: took 656.258179ms to wait for k8s-apps to be running ...
	I1018 09:15:51.621137  285675 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:15:51.621202  285675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:15:51.644819  285675 system_svc.go:56] duration metric: took 23.675205ms WaitForService to wait for kubelet
	I1018 09:15:51.644855  285675 kubeadm.go:586] duration metric: took 14.099836921s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:15:51.644881  285675 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:15:51.648258  285675 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:15:51.648291  285675 node_conditions.go:123] node cpu capacity is 8
	I1018 09:15:51.648305  285675 node_conditions.go:105] duration metric: took 3.418317ms to run NodePressure ...
	I1018 09:15:51.648320  285675 start.go:241] waiting for startup goroutines ...
	I1018 09:15:51.648330  285675 start.go:246] waiting for cluster config update ...
	I1018 09:15:51.648364  285675 start.go:255] writing updated cluster config ...
	I1018 09:15:51.648694  285675 ssh_runner.go:195] Run: rm -f paused
	I1018 09:15:51.653502  285675 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:15:51.657929  285675 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h44wj" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:51.663103  285675 pod_ready.go:94] pod "coredns-66bc5c9577-h44wj" is "Ready"
	I1018 09:15:51.663134  285675 pod_ready.go:86] duration metric: took 5.179066ms for pod "coredns-66bc5c9577-h44wj" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:51.665563  285675 pod_ready.go:83] waiting for pod "etcd-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:51.670119  285675 pod_ready.go:94] pod "etcd-no-preload-031066" is "Ready"
	I1018 09:15:51.670145  285675 pod_ready.go:86] duration metric: took 4.558659ms for pod "etcd-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:51.672245  285675 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:51.677008  285675 pod_ready.go:94] pod "kube-apiserver-no-preload-031066" is "Ready"
	I1018 09:15:51.677038  285675 pod_ready.go:86] duration metric: took 4.762826ms for pod "kube-apiserver-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:51.679528  285675 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:52.058733  285675 pod_ready.go:94] pod "kube-controller-manager-no-preload-031066" is "Ready"
	I1018 09:15:52.058762  285675 pod_ready.go:86] duration metric: took 379.204799ms for pod "kube-controller-manager-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:52.257987  285675 pod_ready.go:83] waiting for pod "kube-proxy-jr5qn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:52.658112  285675 pod_ready.go:94] pod "kube-proxy-jr5qn" is "Ready"
	I1018 09:15:52.658136  285675 pod_ready.go:86] duration metric: took 400.125695ms for pod "kube-proxy-jr5qn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:52.858841  285675 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:53.258538  285675 pod_ready.go:94] pod "kube-scheduler-no-preload-031066" is "Ready"
	I1018 09:15:53.258566  285675 pod_ready.go:86] duration metric: took 399.699089ms for pod "kube-scheduler-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:15:53.258586  285675 pod_ready.go:40] duration metric: took 1.605044456s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:15:53.313306  285675 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:15:53.314973  285675 out.go:179] * Done! kubectl is now configured to use "no-preload-031066" cluster and "default" namespace by default
	I1018 09:15:52.168913  295389 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.720085859s
	I1018 09:15:53.951122  295389 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.502285677s
	I1018 09:15:53.962507  295389 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:15:53.976078  295389 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:15:53.986640  295389 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:15:53.986829  295389 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-880603 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:15:53.996434  295389 kubeadm.go:318] [bootstrap-token] Using token: ovs04s.watlstv91k3hv5mo
	I1018 09:15:53.997737  295389 out.go:252]   - Configuring RBAC rules ...
	I1018 09:15:53.997900  295389 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:15:54.002021  295389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:15:54.008984  295389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:15:54.012061  295389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:15:54.016227  295389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:15:54.022168  295389 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:15:54.359531  295389 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:15:54.775814  295389 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:15:55.357377  295389 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:15:55.358316  295389 kubeadm.go:318] 
	I1018 09:15:55.358413  295389 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:15:55.358423  295389 kubeadm.go:318] 
	I1018 09:15:55.358519  295389 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:15:55.358536  295389 kubeadm.go:318] 
	I1018 09:15:55.358576  295389 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:15:55.358646  295389 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:15:55.358692  295389 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:15:55.358699  295389 kubeadm.go:318] 
	I1018 09:15:55.358752  295389 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:15:55.358758  295389 kubeadm.go:318] 
	I1018 09:15:55.358798  295389 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:15:55.358804  295389 kubeadm.go:318] 
	I1018 09:15:55.358879  295389 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:15:55.358959  295389 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:15:55.359019  295389 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:15:55.359025  295389 kubeadm.go:318] 
	I1018 09:15:55.359119  295389 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:15:55.359195  295389 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:15:55.359204  295389 kubeadm.go:318] 
	I1018 09:15:55.359285  295389 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ovs04s.watlstv91k3hv5mo \
	I1018 09:15:55.359472  295389 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f \
	I1018 09:15:55.359509  295389 kubeadm.go:318] 	--control-plane 
	I1018 09:15:55.359519  295389 kubeadm.go:318] 
	I1018 09:15:55.359658  295389 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:15:55.359673  295389 kubeadm.go:318] 
	I1018 09:15:55.359793  295389 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ovs04s.watlstv91k3hv5mo \
	I1018 09:15:55.359886  295389 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f 
	I1018 09:15:55.363165  295389 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:15:55.363330  295389 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
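(Editor's note: the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA cert with the standard recipe from the kubeadm docs, with the CA path adjusted to minikube's certs dir for this run.)

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'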
	I1018 09:15:55.363370  295389 cni.go:84] Creating CNI manager for ""
	I1018 09:15:55.363379  295389 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:15:55.365868  295389 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 09:15:55.367078  295389 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:15:55.372247  295389 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:15:55.372268  295389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:15:55.387696  295389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:15:55.618905  295389 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:15:55.619015  295389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:55.619056  295389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-880603 minikube.k8s.io/updated_at=2025_10_18T09_15_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=embed-certs-880603 minikube.k8s.io/primary=true
	I1018 09:15:55.629285  295389 ops.go:34] apiserver oom_adj: -16
	I1018 09:15:55.695669  295389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:56.196095  295389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:56.696013  295389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:57.195859  295389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:57.695723  295389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:58.196538  295389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:58.696561  295389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:59.196260  295389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:59.696167  295389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:15:59.776120  295389 kubeadm.go:1113] duration metric: took 4.157176048s to wait for elevateKubeSystemPrivileges
	I1018 09:15:59.776157  295389 kubeadm.go:402] duration metric: took 16.147711712s to StartCluster
	I1018 09:15:59.776179  295389 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:59.776259  295389 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:15:59.778453  295389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:15:59.778769  295389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:15:59.778800  295389 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:15:59.778867  295389 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:15:59.778960  295389 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880603"
	I1018 09:15:59.778978  295389 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-880603"
	I1018 09:15:59.779009  295389 host.go:66] Checking if "embed-certs-880603" exists ...
	I1018 09:15:59.779010  295389 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880603"
	I1018 09:15:59.779042  295389 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880603"
	I1018 09:15:59.779127  295389 config.go:182] Loaded profile config "embed-certs-880603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:15:59.779560  295389 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:15:59.779594  295389 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:15:59.783861  295389 out.go:179] * Verifying Kubernetes components...
	I1018 09:15:59.786003  295389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:15:59.808218  295389 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:15:59.809521  295389 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:15:59.809544  295389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:15:59.809610  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:59.811261  295389 addons.go:238] Setting addon default-storageclass=true in "embed-certs-880603"
	I1018 09:15:59.811530  295389 host.go:66] Checking if "embed-certs-880603" exists ...
	I1018 09:15:59.812313  295389 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:15:59.845699  295389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:15:59.850971  295389 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:15:59.850993  295389 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:15:59.851057  295389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:15:59.880777  295389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:15:59.900788  295389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:15:59.952257  295389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:15:59.976980  295389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:15:59.999760  295389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:16:00.106236  295389 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
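(Editor's note: the sed pipeline at 09:15:59.900788 injects a hosts stanza, 192.168.76.1 host.minikube.internal with fallthrough, ahead of CoreDNS's "forward . /etc/resolv.conf" plugin and replaces the ConfigMap, which is what the "host record injected" line above confirms. The patched Corefile can be inspected afterwards:)

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'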
	I1018 09:16:00.107974  295389 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880603" to be "Ready" ...
	I1018 09:16:00.369994  295389 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	
	
	==> CRI-O <==
	Oct 18 09:15:50 no-preload-031066 crio[768]: time="2025-10-18T09:15:50.976576104Z" level=info msg="Starting container: a66e2a88b8189dcb310fa177e178fa602b38e1cc448eb0bbe958bbab527d055a" id=e184f5ff-7804-4014-9f13-2c9a4d95fa2b name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:15:50 no-preload-031066 crio[768]: time="2025-10-18T09:15:50.978438334Z" level=info msg="Started container" PID=2885 containerID=a66e2a88b8189dcb310fa177e178fa602b38e1cc448eb0bbe958bbab527d055a description=kube-system/coredns-66bc5c9577-h44wj/coredns id=e184f5ff-7804-4014-9f13-2c9a4d95fa2b name=/runtime.v1.RuntimeService/StartContainer sandboxID=a060360e4e745cb17a6a5a276093c83744af6ca034b70682dc5e6e15e2dd3086
	Oct 18 09:15:53 no-preload-031066 crio[768]: time="2025-10-18T09:15:53.790665614Z" level=info msg="Running pod sandbox: default/busybox/POD" id=268f3f84-c478-4305-9ef4-c5e038775c6a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:15:53 no-preload-031066 crio[768]: time="2025-10-18T09:15:53.790791808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:15:53 no-preload-031066 crio[768]: time="2025-10-18T09:15:53.796821131Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:47a730ddebe4359e79a9c6eb6b0b37570713f95ab8e7179ae38ec9325f67009c UID:f45fe433-dd70-4ef8-86fc-49c43f3e3c71 NetNS:/var/run/netns/9a10f15e-b0d0-431d-be61-d57d08e79fa0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000988350}] Aliases:map[]}"
	Oct 18 09:15:53 no-preload-031066 crio[768]: time="2025-10-18T09:15:53.796863402Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:15:53 no-preload-031066 crio[768]: time="2025-10-18T09:15:53.80659519Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:47a730ddebe4359e79a9c6eb6b0b37570713f95ab8e7179ae38ec9325f67009c UID:f45fe433-dd70-4ef8-86fc-49c43f3e3c71 NetNS:/var/run/netns/9a10f15e-b0d0-431d-be61-d57d08e79fa0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000988350}] Aliases:map[]}"
	Oct 18 09:15:53 no-preload-031066 crio[768]: time="2025-10-18T09:15:53.806745192Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 09:15:53 no-preload-031066 crio[768]: time="2025-10-18T09:15:53.807544653Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:15:53 no-preload-031066 crio[768]: time="2025-10-18T09:15:53.808483165Z" level=info msg="Ran pod sandbox 47a730ddebe4359e79a9c6eb6b0b37570713f95ab8e7179ae38ec9325f67009c with infra container: default/busybox/POD" id=268f3f84-c478-4305-9ef4-c5e038775c6a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:15:53 no-preload-031066 crio[768]: time="2025-10-18T09:15:53.809706911Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5471692f-126d-4fa2-acb0-7858bbc3ea92 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:15:53 no-preload-031066 crio[768]: time="2025-10-18T09:15:53.809808438Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5471692f-126d-4fa2-acb0-7858bbc3ea92 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:15:53 no-preload-031066 crio[768]: time="2025-10-18T09:15:53.809837778Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5471692f-126d-4fa2-acb0-7858bbc3ea92 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:15:53 no-preload-031066 crio[768]: time="2025-10-18T09:15:53.810479305Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dc7f11f0-9a30-4443-918b-73911a98283d name=/runtime.v1.ImageService/PullImage
	Oct 18 09:15:53 no-preload-031066 crio[768]: time="2025-10-18T09:15:53.813326934Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 09:15:54 no-preload-031066 crio[768]: time="2025-10-18T09:15:54.526521876Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=dc7f11f0-9a30-4443-918b-73911a98283d name=/runtime.v1.ImageService/PullImage
	Oct 18 09:15:54 no-preload-031066 crio[768]: time="2025-10-18T09:15:54.527201546Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=acf92cea-6e51-49db-9de0-5cedb6cb0e40 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:15:54 no-preload-031066 crio[768]: time="2025-10-18T09:15:54.529010564Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=91fe95db-c51c-4eaf-8113-fc8493933483 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:15:54 no-preload-031066 crio[768]: time="2025-10-18T09:15:54.535735057Z" level=info msg="Creating container: default/busybox/busybox" id=40693dae-2664-4f35-972d-844eb7633d5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:15:54 no-preload-031066 crio[768]: time="2025-10-18T09:15:54.536761974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:15:54 no-preload-031066 crio[768]: time="2025-10-18T09:15:54.542820917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:15:54 no-preload-031066 crio[768]: time="2025-10-18T09:15:54.543434139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:15:54 no-preload-031066 crio[768]: time="2025-10-18T09:15:54.57592289Z" level=info msg="Created container bdde4bbe20ae272db11db019e58f9ad3227c4dbe79bfa0d1f437f29e5bfdc05f: default/busybox/busybox" id=40693dae-2664-4f35-972d-844eb7633d5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:15:54 no-preload-031066 crio[768]: time="2025-10-18T09:15:54.577156426Z" level=info msg="Starting container: bdde4bbe20ae272db11db019e58f9ad3227c4dbe79bfa0d1f437f29e5bfdc05f" id=d00b7b58-a02d-4748-847d-7c29d5dae0cc name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:15:54 no-preload-031066 crio[768]: time="2025-10-18T09:15:54.579813213Z" level=info msg="Started container" PID=2955 containerID=bdde4bbe20ae272db11db019e58f9ad3227c4dbe79bfa0d1f437f29e5bfdc05f description=default/busybox/busybox id=d00b7b58-a02d-4748-847d-7c29d5dae0cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=47a730ddebe4359e79a9c6eb6b0b37570713f95ab8e7179ae38ec9325f67009c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bdde4bbe20ae2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   47a730ddebe43       busybox                                     default
	a66e2a88b8189       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 seconds ago      Running             coredns                   0                   a060360e4e745       coredns-66bc5c9577-h44wj                    kube-system
	235ee82ae0ba5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   8d97a75c303ea       storage-provisioner                         kube-system
	cca2039567ed3       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   6e4032057920b       kindnet-k7m9t                               kube-system
	7f3f35d24d77e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   cf961c221ce0f       kube-proxy-jr5qn                            kube-system
	ae2ff9f711871       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   4320c50514b59       kube-controller-manager-no-preload-031066   kube-system
	bcfbbb7b2d27c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   d86858519dfd3       kube-scheduler-no-preload-031066            kube-system
	43d944dee41bb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   afacd60813fab       etcd-no-preload-031066                      kube-system
	724800d470bcb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   054c309b5db3f       kube-apiserver-no-preload-031066            kube-system
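	The table above is the CRI's view of the node; an equivalent listing can be reproduced on the node with crictl (a diagnostic sketch, assuming the profile name from these logs):
	  $ minikube ssh -p no-preload-031066 -- sudo crictl ps -a
	  $ minikube ssh -p no-preload-031066 -- sudo crictl inspect bdde4bbe20ae2   # accepts the truncated ID shown above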
	
	
	==> coredns [a66e2a88b8189dcb310fa177e178fa602b38e1cc448eb0bbe958bbab527d055a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34058 - 23236 "HINFO IN 8390831316571060688.3868787474599931158. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069725546s
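	The HINFO lookup above is CoreDNS's startup self-check; in-cluster resolution through this instance can be exercised from the busybox pod started earlier (a sketch, not part of the captured run):
	  $ kubectl --context no-preload-031066 exec busybox -- nslookup kubernetes.default.svc.cluster.local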
	
	
	==> describe nodes <==
	Name:               no-preload-031066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-031066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=no-preload-031066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_15_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:15:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-031066
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:15:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:15:52 +0000   Sat, 18 Oct 2025 09:15:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:15:52 +0000   Sat, 18 Oct 2025 09:15:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:15:52 +0000   Sat, 18 Oct 2025 09:15:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:15:52 +0000   Sat, 18 Oct 2025 09:15:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-031066
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                01d62f53-a2fc-4f1d-88c2-abcb9799608b
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-h44wj                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-031066                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-k7m9t                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-031066             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-no-preload-031066    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-jr5qn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-031066             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node no-preload-031066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node no-preload-031066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node no-preload-031066 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node no-preload-031066 event: Registered Node no-preload-031066 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-031066 status is now: NodeReady
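	For reference, the 850m CPU request total above is simply the sum of the listed per-pod requests (100m + 100m + 100m + 250m + 200m + 100m), and the whole section can be regenerated with:
	  $ kubectl --context no-preload-031066 describe node no-preload-031066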
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[ +43.809014] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 6f 4b 2b 7f 46 08 06
	[  +0.000367] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
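	The "martian source" lines are the kernel flagging packets whose source address should not appear on that interface; they are emitted only while martian logging is enabled, which can be checked on the node (a diagnostic sketch):
	  $ minikube ssh -p no-preload-031066 -- sysctl net.ipv4.conf.all.log_martians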
	
	
	==> etcd [43d944dee41bbe7d50eaa15c0ae2f8c42d4249f17d41b15f489a4c62efda2a30] <==
	{"level":"warn","ts":"2025-10-18T09:15:29.008453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:29.016423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:29.025313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:29.033248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:29.041839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:29.049109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:29.056726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:29.064021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:29.072154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:29.083804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:29.090965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:29.099167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:29.157263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:35.915895Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.992035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-10-18T09:15:35.915973Z","caller":"traceutil/trace.go:172","msg":"trace[1556401238] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:293; }","duration":"107.085608ms","start":"2025-10-18T09:15:35.808872Z","end":"2025-10-18T09:15:35.915958Z","steps":["trace[1556401238] 'range keys from in-memory index tree'  (duration: 106.839226ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:36.290978Z","caller":"traceutil/trace.go:172","msg":"trace[7526961] transaction","detail":"{read_only:false; response_revision:295; number_of_response:1; }","duration":"131.74311ms","start":"2025-10-18T09:15:36.159212Z","end":"2025-10-18T09:15:36.290955Z","steps":["trace[7526961] 'process raft request'  (duration: 131.595741ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:36.582333Z","caller":"traceutil/trace.go:172","msg":"trace[2043035723] transaction","detail":"{read_only:false; response_revision:296; number_of_response:1; }","duration":"120.729963ms","start":"2025-10-18T09:15:36.461581Z","end":"2025-10-18T09:15:36.582310Z","steps":["trace[2043035723] 'process raft request'  (duration: 120.616596ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:36.757136Z","caller":"traceutil/trace.go:172","msg":"trace[979564557] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"136.547772ms","start":"2025-10-18T09:15:36.620569Z","end":"2025-10-18T09:15:36.757117Z","steps":["trace[979564557] 'process raft request'  (duration: 136.259856ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:36.778678Z","caller":"traceutil/trace.go:172","msg":"trace[1281392909] transaction","detail":"{read_only:false; response_revision:298; number_of_response:1; }","duration":"155.753795ms","start":"2025-10-18T09:15:36.622897Z","end":"2025-10-18T09:15:36.778650Z","steps":["trace[1281392909] 'process raft request'  (duration: 155.554685ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:36.778695Z","caller":"traceutil/trace.go:172","msg":"trace[1472320282] transaction","detail":"{read_only:false; response_revision:299; number_of_response:1; }","duration":"154.760265ms","start":"2025-10-18T09:15:36.623924Z","end":"2025-10-18T09:15:36.778684Z","steps":["trace[1472320282] 'process raft request'  (duration: 154.61404ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:37.411991Z","caller":"traceutil/trace.go:172","msg":"trace[1360662556] transaction","detail":"{read_only:false; response_revision:307; number_of_response:1; }","duration":"101.37302ms","start":"2025-10-18T09:15:37.310596Z","end":"2025-10-18T09:15:37.411969Z","steps":["trace[1360662556] 'process raft request'  (duration: 81.127634ms)","trace[1360662556] 'compare'  (duration: 20.102111ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:15:37.439940Z","caller":"traceutil/trace.go:172","msg":"trace[780383896] transaction","detail":"{read_only:false; response_revision:309; number_of_response:1; }","duration":"128.690364ms","start":"2025-10-18T09:15:37.311227Z","end":"2025-10-18T09:15:37.439917Z","steps":["trace[780383896] 'process raft request'  (duration: 128.574753ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:37.439943Z","caller":"traceutil/trace.go:172","msg":"trace[1176745031] transaction","detail":"{read_only:false; response_revision:311; number_of_response:1; }","duration":"125.244545ms","start":"2025-10-18T09:15:37.314687Z","end":"2025-10-18T09:15:37.439931Z","steps":["trace[1176745031] 'process raft request'  (duration: 125.208761ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:37.439937Z","caller":"traceutil/trace.go:172","msg":"trace[231962556] transaction","detail":"{read_only:false; response_revision:308; number_of_response:1; }","duration":"128.719891ms","start":"2025-10-18T09:15:37.311188Z","end":"2025-10-18T09:15:37.439908Z","steps":["trace[231962556] 'process raft request'  (duration: 128.521089ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:15:37.439954Z","caller":"traceutil/trace.go:172","msg":"trace[2092837151] transaction","detail":"{read_only:false; response_revision:310; number_of_response:1; }","duration":"127.062533ms","start":"2025-10-18T09:15:37.312875Z","end":"2025-10-18T09:15:37.439938Z","steps":["trace[2092837151] 'process raft request'  (duration: 126.972414ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:16:02 up 58 min,  0 user,  load average: 3.69, 3.34, 2.26
	Linux no-preload-031066 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cca2039567ed31c3d7de7878c44134fd0f7f19201f40a96b0932d64c60d70a46] <==
	I1018 09:15:39.914000       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:15:39.914272       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 09:15:39.987806       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:15:39.987840       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:15:39.987869       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:15:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:15:40.210711       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:15:40.210755       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:15:40.210768       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:15:40.211674       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:15:40.510974       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:15:40.511008       1 metrics.go:72] Registering metrics
	I1018 09:15:40.511092       1 controller.go:711] "Syncing nftables rules"
	I1018 09:15:50.216473       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:15:50.216535       1 main.go:301] handling current node
	I1018 09:16:00.214871       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:16:00.214916       1 main.go:301] handling current node
	
	
	==> kube-apiserver [724800d470bcb41e21364cde6afb9154332272325182f156b6afb602e066fd89] <==
	I1018 09:15:29.699600       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 09:15:29.699649       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:15:29.699658       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:15:29.699664       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:15:29.699671       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:15:29.700396       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:15:29.868175       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:15:30.588377       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:15:30.593795       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:15:30.593811       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:15:31.171016       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:15:31.217196       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:15:31.293061       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:15:31.299964       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 09:15:31.301374       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:15:31.306332       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:15:31.611853       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:15:32.452739       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:15:32.468089       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:15:32.478839       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:15:37.214492       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 09:15:37.445976       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:15:37.587093       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:15:37.601607       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 09:16:00.602000       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:53624: use of closed network connection
	
	
	==> kube-controller-manager [ae2ff9f7118719d93a7806a2c7f6a3515a9fdfcd3980ff7cdd250dc8e819353d] <==
	I1018 09:15:36.663087       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:15:36.664433       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:15:36.664541       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:15:36.668816       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:15:36.673218       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:15:36.673234       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:15:36.673241       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:15:36.680546       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:15:36.680553       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:15:36.680671       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:15:36.681927       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:15:36.684126       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:15:36.687278       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 09:15:36.695667       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:15:36.698021       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:15:36.711451       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:15:36.711451       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:15:36.711514       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:15:36.711590       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:15:36.711666       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:15:36.712054       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:15:36.712078       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:15:36.712142       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:15:36.758522       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-031066" podCIDRs=["10.244.0.0/24"]
	I1018 09:15:51.622880       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7f3f35d24d77ee8cb65c7fbdd5b5199f580a1847d40592612e1a7c3aa763c6b5] <==
	I1018 09:15:37.867174       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:15:37.947933       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:15:38.048632       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:15:38.048670       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 09:15:38.048769       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:15:38.071327       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:15:38.071400       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:15:38.079099       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:15:38.079739       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:15:38.079788       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:15:38.081372       1 config.go:200] "Starting service config controller"
	I1018 09:15:38.081397       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:15:38.081599       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:15:38.081625       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:15:38.081943       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:15:38.081981       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:15:38.082856       1 config.go:309] "Starting node config controller"
	I1018 09:15:38.082878       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:15:38.182028       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:15:38.182038       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:15:38.182080       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:15:38.183405       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [bcfbbb7b2d27c4e10ef167faff400d9d62cc1cba1ce0cbf1a683b15853fd89cc] <==
	E1018 09:15:29.635091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:15:29.635646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:15:29.635836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:15:29.635972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:15:29.636540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:15:29.636030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:15:29.636120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:15:29.636111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:15:29.636170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:15:29.636840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:15:29.636075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:15:29.636928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:15:29.636191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:15:30.445022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:15:30.449318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:15:30.473754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:15:30.483243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:15:30.504730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:15:30.554498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:15:30.721556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:15:30.822393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:15:30.934031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:15:30.950613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:15:31.004538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1018 09:15:33.531430       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:15:33 no-preload-031066 kubelet[2284]: E1018 09:15:33.369429    2284 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-no-preload-031066\" already exists" pod="kube-system/kube-controller-manager-no-preload-031066"
	Oct 18 09:15:33 no-preload-031066 kubelet[2284]: E1018 09:15:33.370948    2284 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-031066\" already exists" pod="kube-system/kube-scheduler-no-preload-031066"
	Oct 18 09:15:33 no-preload-031066 kubelet[2284]: I1018 09:15:33.384245    2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-031066" podStartSLOduration=2.3842232230000002 podStartE2EDuration="2.384223223s" podCreationTimestamp="2025-10-18 09:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:15:33.372773225 +0000 UTC m=+1.145319642" watchObservedRunningTime="2025-10-18 09:15:33.384223223 +0000 UTC m=+1.156769640"
	Oct 18 09:15:33 no-preload-031066 kubelet[2284]: I1018 09:15:33.401662    2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-031066" podStartSLOduration=1.401639854 podStartE2EDuration="1.401639854s" podCreationTimestamp="2025-10-18 09:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:15:33.384651993 +0000 UTC m=+1.157198415" watchObservedRunningTime="2025-10-18 09:15:33.401639854 +0000 UTC m=+1.174186266"
	Oct 18 09:15:36 no-preload-031066 kubelet[2284]: I1018 09:15:36.857197    2284 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:15:36 no-preload-031066 kubelet[2284]: I1018 09:15:36.857922    2284 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:15:37 no-preload-031066 kubelet[2284]: I1018 09:15:37.444218    2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ae92f3f-9c07-4fb0-8334-549bfd4cac76-kube-proxy\") pod \"kube-proxy-jr5qn\" (UID: \"1ae92f3f-9c07-4fb0-8334-549bfd4cac76\") " pod="kube-system/kube-proxy-jr5qn"
	Oct 18 09:15:37 no-preload-031066 kubelet[2284]: I1018 09:15:37.444316    2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76xft\" (UniqueName: \"kubernetes.io/projected/1ae92f3f-9c07-4fb0-8334-549bfd4cac76-kube-api-access-76xft\") pod \"kube-proxy-jr5qn\" (UID: \"1ae92f3f-9c07-4fb0-8334-549bfd4cac76\") " pod="kube-system/kube-proxy-jr5qn"
	Oct 18 09:15:37 no-preload-031066 kubelet[2284]: I1018 09:15:37.444384    2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ae92f3f-9c07-4fb0-8334-549bfd4cac76-xtables-lock\") pod \"kube-proxy-jr5qn\" (UID: \"1ae92f3f-9c07-4fb0-8334-549bfd4cac76\") " pod="kube-system/kube-proxy-jr5qn"
	Oct 18 09:15:37 no-preload-031066 kubelet[2284]: I1018 09:15:37.444408    2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ae92f3f-9c07-4fb0-8334-549bfd4cac76-lib-modules\") pod \"kube-proxy-jr5qn\" (UID: \"1ae92f3f-9c07-4fb0-8334-549bfd4cac76\") " pod="kube-system/kube-proxy-jr5qn"
	Oct 18 09:15:37 no-preload-031066 kubelet[2284]: I1018 09:15:37.544780    2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98cvv\" (UniqueName: \"kubernetes.io/projected/08c34b72-06a7-4a73-b703-ce61dbf3a37f-kube-api-access-98cvv\") pod \"kindnet-k7m9t\" (UID: \"08c34b72-06a7-4a73-b703-ce61dbf3a37f\") " pod="kube-system/kindnet-k7m9t"
	Oct 18 09:15:37 no-preload-031066 kubelet[2284]: I1018 09:15:37.544865    2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/08c34b72-06a7-4a73-b703-ce61dbf3a37f-cni-cfg\") pod \"kindnet-k7m9t\" (UID: \"08c34b72-06a7-4a73-b703-ce61dbf3a37f\") " pod="kube-system/kindnet-k7m9t"
	Oct 18 09:15:37 no-preload-031066 kubelet[2284]: I1018 09:15:37.544890    2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08c34b72-06a7-4a73-b703-ce61dbf3a37f-xtables-lock\") pod \"kindnet-k7m9t\" (UID: \"08c34b72-06a7-4a73-b703-ce61dbf3a37f\") " pod="kube-system/kindnet-k7m9t"
	Oct 18 09:15:37 no-preload-031066 kubelet[2284]: I1018 09:15:37.544912    2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08c34b72-06a7-4a73-b703-ce61dbf3a37f-lib-modules\") pod \"kindnet-k7m9t\" (UID: \"08c34b72-06a7-4a73-b703-ce61dbf3a37f\") " pod="kube-system/kindnet-k7m9t"
	Oct 18 09:15:39 no-preload-031066 kubelet[2284]: I1018 09:15:39.368251    2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jr5qn" podStartSLOduration=2.368225712 podStartE2EDuration="2.368225712s" podCreationTimestamp="2025-10-18 09:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:15:38.38038334 +0000 UTC m=+6.152929756" watchObservedRunningTime="2025-10-18 09:15:39.368225712 +0000 UTC m=+7.140772129"
	Oct 18 09:15:40 no-preload-031066 kubelet[2284]: I1018 09:15:40.411135    2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-k7m9t" podStartSLOduration=1.5445368 podStartE2EDuration="3.411112518s" podCreationTimestamp="2025-10-18 09:15:37 +0000 UTC" firstStartedPulling="2025-10-18 09:15:37.851445956 +0000 UTC m=+5.623992353" lastFinishedPulling="2025-10-18 09:15:39.718021662 +0000 UTC m=+7.490568071" observedRunningTime="2025-10-18 09:15:40.410751698 +0000 UTC m=+8.183298115" watchObservedRunningTime="2025-10-18 09:15:40.411112518 +0000 UTC m=+8.183658937"
	Oct 18 09:15:50 no-preload-031066 kubelet[2284]: I1018 09:15:50.583738    2284 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 09:15:50 no-preload-031066 kubelet[2284]: I1018 09:15:50.647619    2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5b3e8950-c8a2-4205-b3aa-5c48157fc9d1-tmp\") pod \"storage-provisioner\" (UID: \"5b3e8950-c8a2-4205-b3aa-5c48157fc9d1\") " pod="kube-system/storage-provisioner"
	Oct 18 09:15:50 no-preload-031066 kubelet[2284]: I1018 09:15:50.647693    2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h6fw\" (UniqueName: \"kubernetes.io/projected/5b3e8950-c8a2-4205-b3aa-5c48157fc9d1-kube-api-access-2h6fw\") pod \"storage-provisioner\" (UID: \"5b3e8950-c8a2-4205-b3aa-5c48157fc9d1\") " pod="kube-system/storage-provisioner"
	Oct 18 09:15:50 no-preload-031066 kubelet[2284]: I1018 09:15:50.647722    2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f9ac8bf-4d8f-489f-a5bb-f8ef2d832a89-config-volume\") pod \"coredns-66bc5c9577-h44wj\" (UID: \"0f9ac8bf-4d8f-489f-a5bb-f8ef2d832a89\") " pod="kube-system/coredns-66bc5c9577-h44wj"
	Oct 18 09:15:50 no-preload-031066 kubelet[2284]: I1018 09:15:50.647766    2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnx7x\" (UniqueName: \"kubernetes.io/projected/0f9ac8bf-4d8f-489f-a5bb-f8ef2d832a89-kube-api-access-dnx7x\") pod \"coredns-66bc5c9577-h44wj\" (UID: \"0f9ac8bf-4d8f-489f-a5bb-f8ef2d832a89\") " pod="kube-system/coredns-66bc5c9577-h44wj"
	Oct 18 09:15:51 no-preload-031066 kubelet[2284]: I1018 09:15:51.435522    2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-h44wj" podStartSLOduration=14.435499134 podStartE2EDuration="14.435499134s" podCreationTimestamp="2025-10-18 09:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:15:51.418496405 +0000 UTC m=+19.191042822" watchObservedRunningTime="2025-10-18 09:15:51.435499134 +0000 UTC m=+19.208045545"
	Oct 18 09:15:53 no-preload-031066 kubelet[2284]: I1018 09:15:53.484553    2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.484526357 podStartE2EDuration="15.484526357s" podCreationTimestamp="2025-10-18 09:15:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:15:51.449328832 +0000 UTC m=+19.221875250" watchObservedRunningTime="2025-10-18 09:15:53.484526357 +0000 UTC m=+21.257072778"
	Oct 18 09:15:53 no-preload-031066 kubelet[2284]: I1018 09:15:53.565517    2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgnl5\" (UniqueName: \"kubernetes.io/projected/f45fe433-dd70-4ef8-86fc-49c43f3e3c71-kube-api-access-xgnl5\") pod \"busybox\" (UID: \"f45fe433-dd70-4ef8-86fc-49c43f3e3c71\") " pod="default/busybox"
	Oct 18 09:15:55 no-preload-031066 kubelet[2284]: I1018 09:15:55.434635    2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.71639817 podStartE2EDuration="2.434608253s" podCreationTimestamp="2025-10-18 09:15:53 +0000 UTC" firstStartedPulling="2025-10-18 09:15:53.810063261 +0000 UTC m=+21.582609657" lastFinishedPulling="2025-10-18 09:15:54.528273343 +0000 UTC m=+22.300819740" observedRunningTime="2025-10-18 09:15:55.434525057 +0000 UTC m=+23.207071488" watchObservedRunningTime="2025-10-18 09:15:55.434608253 +0000 UTC m=+23.207154670"
	
	
	==> storage-provisioner [235ee82ae0ba57e89324a9a879706493a489fa0f8a84203ac73b6d11e5260c64] <==
	I1018 09:15:50.985971       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:15:50.996895       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:15:50.996971       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:15:51.002098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:15:51.011623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:15:51.011929       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:15:51.012176       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-031066_a4533dcb-85fe-4c8a-a04c-560371f3e3c9!
	I1018 09:15:51.012741       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d61b21d-ca88-4508-8d32-276d0fdbca79", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-031066_a4533dcb-85fe-4c8a-a04c-560371f3e3c9 became leader
	W1018 09:15:51.015249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:15:51.021786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:15:51.112389       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-031066_a4533dcb-85fe-4c8a-a04c-560371f3e3c9!
	W1018 09:15:53.025518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:15:53.031191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:15:55.034087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:15:55.038441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:15:57.042387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:15:57.046685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:15:59.050285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:15:59.055874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:01.060041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:01.065577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
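	The recurring warnings come from the provisioner's leader election still renewing a v1 Endpoints object; the lease it holds is the one named in the log above and can be inspected with (a sketch):
	  $ kubectl --context no-preload-031066 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml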
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-031066 -n no-preload-031066
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-031066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.30s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-880603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-880603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (247.329872ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:16:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-880603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
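The MK_ADDON_ENABLE_PAUSED failure above comes from the pre-flight paused check, which shells out to `sudo runc list -f json` on the node and treats the missing /run/runc directory as an error. A hedged sketch for reproducing the probe by hand, using the same profile and the ssh form the Audit table below records:

	# re-run the exact probe the addon enable path uses
	minikube ssh -p embed-certs-880603 sudo runc list -f json
	# cross-check through CRI-O's own tooling, which does not depend on /run/runc
	minikube ssh -p embed-certs-880603 sudo crictl ps -a

If crictl shows the containers running while runc has no state directory, the "check paused" step is failing on the probe itself rather than on a genuinely paused runtime.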
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-880603 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-880603 describe deploy/metrics-server -n kube-system: exit status 1 (63.338149ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-880603 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
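Had the deployment been created, the image the assertion expects would be directly observable on the pod template; a sketch with the same context:

	kubectl --context embed-certs-880603 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected, per the --images/--registries flags above: fake.domain/registry.k8s.io/echoserver:1.4

Here it can only return NotFound, since the enable command exited before the manifest was ever applied.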
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-880603
helpers_test.go:243: (dbg) docker inspect embed-certs-880603:

-- stdout --
	[
	    {
	        "Id": "1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e",
	        "Created": "2025-10-18T09:15:37.716133173Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296383,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:15:37.773713443Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e/hostname",
	        "HostsPath": "/var/lib/docker/containers/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e/hosts",
	        "LogPath": "/var/lib/docker/containers/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e-json.log",
	        "Name": "/embed-certs-880603",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-880603:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-880603",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e",
	                "LowerDir": "/var/lib/docker/overlay2/9fc765f8ac24538015479acdd40b3d737a7cedebc070de1fc4ec4d150a46823c-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9fc765f8ac24538015479acdd40b3d737a7cedebc070de1fc4ec4d150a46823c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9fc765f8ac24538015479acdd40b3d737a7cedebc070de1fc4ec4d150a46823c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9fc765f8ac24538015479acdd40b3d737a7cedebc070de1fc4ec4d150a46823c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-880603",
	                "Source": "/var/lib/docker/volumes/embed-certs-880603/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-880603",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-880603",
	                "name.minikube.sigs.k8s.io": "embed-certs-880603",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77d1579e51bd868f54169a55ecea8db863c3e5e4dd96d52d1aab8bcc52d5dae9",
	            "SandboxKey": "/var/run/docker/netns/77d1579e51bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-880603": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:cf:7c:b2:0c:1b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "00da72598f1f33e65a58d1743a0dfc899ddee3ad08c7f711e26bf3f40d92300d",
	                    "EndpointID": "31cda685c54ac24fe9ce8796c30fd60eb7cceb9256e27fa52c9e3ec75084c812",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-880603",
	                        "1b6bc4c9714c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
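Rather than re-parsing this whole blob, the harness extracts single fields with docker's Go-template support, the same pattern that shows up in the start logs below. For instance, the host port mapped to the node's SSH port:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-880603
	# -> 33098, matching NetworkSettings.Ports in the JSON above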
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880603 -n embed-certs-880603
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-880603 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-880603 logs -n 25: (1.056948173s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-448954 sudo cat /var/lib/kubelet/config.yaml                                                                                                       │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl status docker --all --full --no-pager                                                                                        │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl cat docker --no-pager                                                                                                        │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/docker/daemon.json                                                                                                            │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo docker system info                                                                                                                     │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl status cri-docker --all --full --no-pager                                                                                    │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl cat cri-docker --no-pager                                                                                                    │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                               │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                         │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cri-dockerd --version                                                                                                                  │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl status containerd --all --full --no-pager                                                                                    │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl cat containerd --no-pager                                                                                                    │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /lib/systemd/system/containerd.service                                                                                             │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/containerd/config.toml                                                                                                        │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo containerd config dump                                                                                                                 │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl status crio --all --full --no-pager                                                                                          │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl cat crio --no-pager                                                                                                          │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo crio config                                                                                                                            │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ delete  │ -p enable-default-cni-448954                                                                                                                                             │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ delete  │ -p disable-driver-mounts-634520                                                                                                                                          │ disable-driver-mounts-634520 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-031066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p no-preload-031066 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-880603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:16:21
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:16:21.259556  309439 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:16:21.259842  309439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:16:21.259853  309439 out.go:374] Setting ErrFile to fd 2...
	I1018 09:16:21.259859  309439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:16:21.260111  309439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:16:21.260632  309439 out.go:368] Setting JSON to false
	I1018 09:16:21.261865  309439 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3529,"bootTime":1760775452,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:16:21.261961  309439 start.go:141] virtualization: kvm guest
	I1018 09:16:21.264134  309439 out.go:179] * [no-preload-031066] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:16:21.265731  309439 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:16:21.265725  309439 notify.go:220] Checking for updates...
	I1018 09:16:21.268703  309439 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:16:21.270038  309439 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:16:21.271373  309439 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:16:21.272816  309439 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:16:21.274205  309439 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:16:21.275956  309439 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:21.276446  309439 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:16:21.302079  309439 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:16:21.302171  309439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:16:21.363454  309439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-18 09:16:21.352641655 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:16:21.363573  309439 docker.go:318] overlay module found
	I1018 09:16:21.365496  309439 out.go:179] * Using the docker driver based on existing profile
	I1018 09:16:21.366846  309439 start.go:305] selected driver: docker
	I1018 09:16:21.366860  309439 start.go:925] validating driver "docker" against &{Name:no-preload-031066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:16:21.366946  309439 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:16:21.367537  309439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:16:21.430714  309439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-18 09:16:21.420288348 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:16:21.431045  309439 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:16:21.431076  309439 cni.go:84] Creating CNI manager for ""
	I1018 09:16:21.431123  309439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:16:21.431162  309439 start.go:349] cluster config:
	{Name:no-preload-031066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:16:21.433306  309439 out.go:179] * Starting "no-preload-031066" primary control-plane node in "no-preload-031066" cluster
	I1018 09:16:21.434506  309439 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:16:21.435855  309439 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:16:21.437073  309439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:16:21.437171  309439 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:16:21.437215  309439 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/config.json ...
	I1018 09:16:21.437382  309439 cache.go:107] acquiring lock: {Name:mka90e9ba087577c518f2d2789ac53b5d3a7e763 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437396  309439 cache.go:107] acquiring lock: {Name:mk6fc1dc569bbb33e36e89f8f90205f595f97590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437429  309439 cache.go:107] acquiring lock: {Name:mk862309f449c155bd44d2ad75f71086b6e84154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437488  309439 cache.go:107] acquiring lock: {Name:mkba01dbd7a5ffa26c612bd6d2ecfdfb06fab7f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437517  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 09:16:21.437376  309439 cache.go:107] acquiring lock: {Name:mkd7da5cca5b2c7f5a7a2978ccb1f907bf4e999d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437529  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 09:16:21.437531  309439 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 103.396µs
	I1018 09:16:21.437548  309439 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 09:16:21.437540  309439 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 155.136µs
	I1018 09:16:21.437551  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 09:16:21.437553  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 09:16:21.437556  309439 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 09:16:21.437519  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 09:16:21.437561  309439 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 199.486µs
	I1018 09:16:21.437564  309439 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 84.165µs
	I1018 09:16:21.437573  309439 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 09:16:21.437565  309439 cache.go:107] acquiring lock: {Name:mk207c5d06cdfbb02440711f0747e0524648cf15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437611  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 09:16:21.437627  309439 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 65.353µs
	I1018 09:16:21.437636  309439 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 09:16:21.437575  309439 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 09:16:21.437513  309439 cache.go:107] acquiring lock: {Name:mk4deb8933cd428b15e028b41c12d1c1d0a4c5a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437573  309439 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 203.037µs
	I1018 09:16:21.437696  309439 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 09:16:21.437553  309439 cache.go:107] acquiring lock: {Name:mkeb58e0ef10b1fdccc29a88361956d4cde72da3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437730  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 09:16:21.437741  309439 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 240.836µs
	I1018 09:16:21.437753  309439 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 09:16:21.437671  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1018 09:16:21.437763  309439 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 319.038µs
	I1018 09:16:21.437774  309439 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 09:16:21.437787  309439 cache.go:87] Successfully saved all images to host disk.
	I1018 09:16:21.460092  309439 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:16:21.460113  309439 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:16:21.460128  309439 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:16:21.460160  309439 start.go:360] acquireMachinesLock for no-preload-031066: {Name:mkf2aade90157f4c0d311140fc5fc0e3e0428507 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.460220  309439 start.go:364] duration metric: took 39.29µs to acquireMachinesLock for "no-preload-031066"
	I1018 09:16:21.460239  309439 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:16:21.460249  309439 fix.go:54] fixHost starting: 
	I1018 09:16:21.460515  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:21.479263  309439 fix.go:112] recreateIfNeeded on no-preload-031066: state=Stopped err=<nil>
	W1018 09:16:21.479306  309439 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:16:18.612194  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:21.111155  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:19.794473  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:22.294004  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:19.783671  307829 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-986220:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.490229561s)
	I1018 09:16:19.783707  307829 kic.go:203] duration metric: took 4.490410558s to extract preloaded images to volume ...
	W1018 09:16:19.783815  307829 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:16:19.783854  307829 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:16:19.783901  307829 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:16:19.847832  307829 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-986220 --name default-k8s-diff-port-986220 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-986220 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-986220 --network default-k8s-diff-port-986220 --ip 192.168.94.2 --volume default-k8s-diff-port-986220:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:16:20.166578  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Running}}
	I1018 09:16:20.186662  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:20.206875  307829 cli_runner.go:164] Run: docker exec default-k8s-diff-port-986220 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:16:20.258252  307829 oci.go:144] the created container "default-k8s-diff-port-986220" has a running status.
	I1018 09:16:20.258285  307829 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa...
	I1018 09:16:20.304155  307829 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:16:20.339663  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:20.359254  307829 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:16:20.359276  307829 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-986220 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:16:20.402369  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:20.428033  307829 machine.go:93] provisionDockerMachine start ...
	I1018 09:16:20.428144  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:20.449570  307829 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:20.449929  307829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 09:16:20.449948  307829 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:16:20.450769  307829 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36384->127.0.0.1:33108: read: connection reset by peer
	I1018 09:16:23.589648  307829 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-986220
	
	I1018 09:16:23.589683  307829 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-986220"
	I1018 09:16:23.589753  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:23.609951  307829 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:23.610242  307829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 09:16:23.610262  307829 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-986220 && echo "default-k8s-diff-port-986220" | sudo tee /etc/hostname
	I1018 09:16:23.757907  307829 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-986220
	
	I1018 09:16:23.757979  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:23.777613  307829 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:23.777861  307829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 09:16:23.777889  307829 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-986220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-986220/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-986220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:16:23.916520  307829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:16:23.916547  307829 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:16:23.916591  307829 ubuntu.go:190] setting up certificates
	I1018 09:16:23.916606  307829 provision.go:84] configureAuth start
	I1018 09:16:23.916674  307829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-986220
	I1018 09:16:23.935731  307829 provision.go:143] copyHostCerts
	I1018 09:16:23.935809  307829 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:16:23.935828  307829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:16:23.935910  307829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:16:23.936072  307829 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:16:23.936088  307829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:16:23.936136  307829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:16:23.936218  307829 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:16:23.936228  307829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:16:23.936286  307829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:16:23.936407  307829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-986220 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-986220 localhost minikube]
	I1018 09:16:24.096815  307829 provision.go:177] copyRemoteCerts
	I1018 09:16:24.096879  307829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:16:24.096916  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.116412  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:24.215442  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:16:24.236994  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 09:16:24.256007  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:16:24.275068  307829 provision.go:87] duration metric: took 358.446736ms to configureAuth
	I1018 09:16:24.275096  307829 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:16:24.275276  307829 config.go:182] Loaded profile config "default-k8s-diff-port-986220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:24.275405  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.295823  307829 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:24.296078  307829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 09:16:24.296097  307829 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:16:24.553053  307829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:16:24.553082  307829 machine.go:96] duration metric: took 4.125023459s to provisionDockerMachine
	I1018 09:16:24.553094  307829 client.go:171] duration metric: took 9.862444073s to LocalClient.Create
	I1018 09:16:24.553114  307829 start.go:167] duration metric: took 9.862511631s to libmachine.API.Create "default-k8s-diff-port-986220"
	I1018 09:16:24.553124  307829 start.go:293] postStartSetup for "default-k8s-diff-port-986220" (driver="docker")
	I1018 09:16:24.553138  307829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:16:24.553242  307829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:16:24.553291  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.572128  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:24.672893  307829 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:16:24.676680  307829 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:16:24.676709  307829 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:16:24.676719  307829 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:16:24.676777  307829 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:16:24.676867  307829 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:16:24.676983  307829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:16:24.686464  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:16:24.708946  307829 start.go:296] duration metric: took 155.806152ms for postStartSetup
	I1018 09:16:24.709434  307829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-986220
	I1018 09:16:24.729672  307829 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/config.json ...
	I1018 09:16:24.729981  307829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:16:24.730033  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.749138  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:24.846031  307829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
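
The two df/awk probes above read usage and free space of the filesystem behind /var. On Linux the same number can be pulled without shelling out; a small sketch via syscall.Statfs (it truncates rather than rounds up, so it can differ from `df -BG` by one unit):

    package main

    import (
    	"fmt"
    	"syscall"
    )

    // Rough Go equivalent of `df -BG /var | awk 'NR==2{print $4}'`:
    // free space on the filesystem backing /var, in GiB. Linux-only.
    func main() {
    	var st syscall.Statfs_t
    	if err := syscall.Statfs("/var", &st); err != nil {
    		panic(err)
    	}
    	freeGiB := st.Bavail * uint64(st.Bsize) / (1 << 30)
    	fmt.Printf("%dG\n", freeGiB)
    }
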
	I1018 09:16:24.851592  307829 start.go:128] duration metric: took 10.163744383s to createHost
	I1018 09:16:24.851619  307829 start.go:83] releasing machines lock for "default-k8s-diff-port-986220", held for 10.163895422s
	I1018 09:16:24.851680  307829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-986220
	I1018 09:16:24.871446  307829 ssh_runner.go:195] Run: cat /version.json
	I1018 09:16:24.871492  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.871527  307829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:16:24.871607  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.892448  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:24.892466  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:25.047556  307829 ssh_runner.go:195] Run: systemctl --version
	I1018 09:16:25.056042  307829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:16:25.095154  307829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:16:25.100317  307829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:16:25.100404  307829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:16:25.135472  307829 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
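
A sketch of the find/mv step above in Go: any bridge or podman CNI config not already parked gets an ".mk_disabled" suffix so the recommended kindnet CNI owns pod networking. filepath.Glob here is an approximation of find's -maxdepth 1 -type f matching; it would need to run as root against a real /etc/cni/net.d:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	const dir = "/etc/cni/net.d"
    	for _, pat := range []string{"*bridge*", "*podman*"} {
    		matches, _ := filepath.Glob(filepath.Join(dir, pat))
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled on a previous start
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err == nil {
    				fmt.Printf("%s, ", m)
    			}
    		}
    	}
    }
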
	I1018 09:16:25.135500  307829 start.go:495] detecting cgroup driver to use...
	I1018 09:16:25.135533  307829 detect.go:190] detected "systemd" cgroup driver on host os
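
One plausible way to arrive at the "systemd" verdict above; this is a heuristic sketch, not necessarily the exact check minikube's detect package performs:

    package main

    import (
    	"fmt"
    	"os"
    )

    // Hosts running systemd as init expose /run/systemd/system, and are
    // assumed to want the "systemd" cgroup driver; anything else falls
    // back to "cgroupfs".
    func cgroupDriver() string {
    	if fi, err := os.Stat("/run/systemd/system"); err == nil && fi.IsDir() {
    		return "systemd"
    	}
    	return "cgroupfs"
    }

    func main() {
    	fmt.Printf("detected %q cgroup driver on host os\n", cgroupDriver())
    }
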
	I1018 09:16:25.135579  307829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:16:25.163992  307829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:16:25.179086  307829 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:16:25.179151  307829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:16:25.197806  307829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:16:25.218805  307829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:16:25.310534  307829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:16:25.402675  307829 docker.go:234] disabling docker service ...
	I1018 09:16:25.402736  307829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:16:25.424774  307829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:16:25.439087  307829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:16:25.533380  307829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:16:25.620820  307829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:16:25.636909  307829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:16:25.654401  307829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:16:25.654463  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.667479  307829 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:16:25.667553  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.678806  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.692980  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.704763  307829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:16:25.715821  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.727218  307829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.742569  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.752002  307829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:16:25.760372  307829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
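
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the daemon-reload/restart that follows. The two central substitutions (pause image and cgroup manager) expressed as an equivalent Go sketch with multiline regexps; the path and file mode are assumptions of the sketch:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}
    	// Point CRI-O at the expected pause image, as the first sed does.
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	// Force the systemd cgroup manager, as the second sed does.
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
    	if err := os.WriteFile(conf, data, 0o644); err != nil {
    		panic(err)
    	}
    }
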
	I1018 09:16:25.768535  307829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:25.856991  307829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:16:25.969026  307829 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:16:25.969096  307829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
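
Waiting on the socket is a plain stat-until-deadline poll. A sketch with an assumed 500ms interval:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // Poll for the CRI-O socket with a 60s deadline, mirroring the
    // "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("crio.sock is up")
    }
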
	I1018 09:16:25.974137  307829 start.go:563] Will wait 60s for crictl version
	I1018 09:16:25.974200  307829 ssh_runner.go:195] Run: which crictl
	I1018 09:16:25.978663  307829 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:16:26.006946  307829 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:16:26.007028  307829 ssh_runner.go:195] Run: crio --version
	I1018 09:16:26.037634  307829 ssh_runner.go:195] Run: crio --version
	I1018 09:16:26.069278  307829 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:16:21.481637  309439 out.go:252] * Restarting existing docker container for "no-preload-031066" ...
	I1018 09:16:21.481720  309439 cli_runner.go:164] Run: docker start no-preload-031066
	I1018 09:16:21.732544  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:21.752925  309439 kic.go:430] container "no-preload-031066" state is running.
	I1018 09:16:21.753416  309439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-031066
	I1018 09:16:21.774132  309439 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/config.json ...
	I1018 09:16:21.774479  309439 machine.go:93] provisionDockerMachine start ...
	I1018 09:16:21.774570  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:21.795137  309439 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:21.795458  309439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 09:16:21.795477  309439 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:16:21.796069  309439 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52300->127.0.0.1:33113: read: connection reset by peer
	I1018 09:16:24.935395  309439 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-031066
	
	I1018 09:16:24.935424  309439 ubuntu.go:182] provisioning hostname "no-preload-031066"
	I1018 09:16:24.935491  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:24.955546  309439 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:24.955764  309439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 09:16:24.955779  309439 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-031066 && echo "no-preload-031066" | sudo tee /etc/hostname
	I1018 09:16:25.103825  309439 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-031066
	
	I1018 09:16:25.103917  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:25.127296  309439 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:25.127611  309439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 09:16:25.127652  309439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-031066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-031066/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-031066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:16:25.274198  309439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:16:25.274224  309439 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:16:25.274267  309439 ubuntu.go:190] setting up certificates
	I1018 09:16:25.274280  309439 provision.go:84] configureAuth start
	I1018 09:16:25.274327  309439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-031066
	I1018 09:16:25.295152  309439 provision.go:143] copyHostCerts
	I1018 09:16:25.295209  309439 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:16:25.295222  309439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:16:25.295281  309439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:16:25.295411  309439 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:16:25.295423  309439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:16:25.295448  309439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:16:25.295525  309439 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:16:25.295533  309439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:16:25.295554  309439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:16:25.295606  309439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.no-preload-031066 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-031066]
	I1018 09:16:25.425118  309439 provision.go:177] copyRemoteCerts
	I1018 09:16:25.425176  309439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:16:25.425241  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:25.445036  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:25.543837  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:16:25.565616  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:16:25.589029  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:16:25.608484  309439 provision.go:87] duration metric: took 334.191405ms to configureAuth
	I1018 09:16:25.608516  309439 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:16:25.608733  309439 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:25.608856  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:25.632064  309439 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:25.632401  309439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 09:16:25.632427  309439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:16:25.957864  309439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:16:25.957894  309439 machine.go:96] duration metric: took 4.183393935s to provisionDockerMachine
	I1018 09:16:25.957909  309439 start.go:293] postStartSetup for "no-preload-031066" (driver="docker")
	I1018 09:16:25.957922  309439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:16:25.957977  309439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:16:25.958020  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:25.980314  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:26.082603  309439 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:16:26.086751  309439 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:16:26.086778  309439 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:16:26.086789  309439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:16:26.086848  309439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:16:26.086937  309439 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:16:26.087048  309439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:16:26.096192  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:16:26.115776  309439 start.go:296] duration metric: took 157.850809ms for postStartSetup
	I1018 09:16:26.115859  309439 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:16:26.115914  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:26.137971  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:26.234585  309439 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:16:26.239791  309439 fix.go:56] duration metric: took 4.779536543s for fixHost
	I1018 09:16:26.239820  309439 start.go:83] releasing machines lock for "no-preload-031066", held for 4.779588591s
	I1018 09:16:26.239895  309439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-031066
	I1018 09:16:26.259555  309439 ssh_runner.go:195] Run: cat /version.json
	W1018 09:16:23.111428  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:25.112093  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	I1018 09:16:26.259669  309439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:16:26.259627  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:26.259792  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:26.281760  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:26.281753  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:26.435671  309439 ssh_runner.go:195] Run: systemctl --version
	I1018 09:16:26.443263  309439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:16:26.486908  309439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:16:26.492101  309439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:16:26.492171  309439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:16:26.501157  309439 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:16:26.501179  309439 start.go:495] detecting cgroup driver to use...
	I1018 09:16:26.501207  309439 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:16:26.501261  309439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:16:26.517601  309439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:16:26.535073  309439 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:16:26.535137  309439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:16:26.559014  309439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:16:26.573192  309439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:16:26.664628  309439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:16:26.753369  309439 docker.go:234] disabling docker service ...
	I1018 09:16:26.753441  309439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:16:26.769930  309439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:16:26.784250  309439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:16:26.875825  309439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:16:26.963494  309439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:16:26.977661  309439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:16:26.995292  309439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:16:26.995366  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.005257  309439 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:16:27.005335  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.015687  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.026502  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.037104  309439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:16:27.046231  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.056592  309439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.066210  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.076520  309439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:16:27.086299  309439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:16:27.099809  309439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:27.202216  309439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:16:27.320390  309439 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:16:27.320456  309439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:16:27.325144  309439 start.go:563] Will wait 60s for crictl version
	I1018 09:16:27.325213  309439 ssh_runner.go:195] Run: which crictl
	I1018 09:16:27.329944  309439 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:16:27.360229  309439 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:16:27.360337  309439 ssh_runner.go:195] Run: crio --version
	I1018 09:16:27.392843  309439 ssh_runner.go:195] Run: crio --version
	I1018 09:16:27.430211  309439 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:16:26.070774  307829 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-986220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:16:26.091069  307829 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 09:16:26.095294  307829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
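
The bash one-liner above rebuilds /etc/hosts by filtering out any stale host.minikube.internal line and appending a fresh one. The same filter-then-append shape in Go (gateway IP taken from the log; it would have to run as root on the node):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.94.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var out []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue // drop any stale entry, like the grep -v
    		}
    		out = append(out, line)
    	}
    	out = append(out, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(out, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    }
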
	I1018 09:16:26.106817  307829 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-986220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:16:26.106953  307829 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:16:26.107001  307829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:16:26.146050  307829 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:16:26.146071  307829 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:16:26.146117  307829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:16:26.175872  307829 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:16:26.175899  307829 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:16:26.175908  307829 kubeadm.go:934] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1018 09:16:26.176038  307829 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-986220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:16:26.176148  307829 ssh_runner.go:195] Run: crio config
	I1018 09:16:26.227370  307829 cni.go:84] Creating CNI manager for ""
	I1018 09:16:26.227396  307829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:16:26.227416  307829 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:16:26.227445  307829 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-986220 NodeName:default-k8s-diff-port-986220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:16:26.227594  307829 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-986220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:16:26.227669  307829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:16:26.236922  307829 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:16:26.236985  307829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:16:26.246249  307829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 09:16:26.261061  307829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:16:26.281576  307829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
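
The rendered config lands on the node as /var/tmp/minikube/kubeadm.yaml.new. A quick sanity check one could run over that multi-document YAML, assuming the third-party gopkg.in/yaml.v3 module is available; it decodes each document and prints its kind, surfacing syntax errors before kubeadm init ever sees the file:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		// Pull out just the identifying fields of each document.
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
    	}
    }
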
	I1018 09:16:26.296929  307829 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:16:26.300975  307829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:16:26.313470  307829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:26.402470  307829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:16:26.432058  307829 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220 for IP: 192.168.94.2
	I1018 09:16:26.432089  307829 certs.go:195] generating shared ca certs ...
	I1018 09:16:26.432109  307829 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:26.432273  307829 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:16:26.432354  307829 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:16:26.432374  307829 certs.go:257] generating profile certs ...
	I1018 09:16:26.432456  307829 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.key
	I1018 09:16:26.432479  307829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.crt with IP's: []
	I1018 09:16:26.858948  307829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.crt ...
	I1018 09:16:26.858974  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.crt: {Name:mk51c8869bcfadfee754b4430b46c6f8826cd48e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:26.859138  307829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.key ...
	I1018 09:16:26.859151  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.key: {Name:mk25866fa200b9b02b356bf6c37bf61a8173ffbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:26.859263  307829 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key.6dd2aec8
	I1018 09:16:26.859285  307829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt.6dd2aec8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1018 09:16:27.395262  307829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt.6dd2aec8 ...
	I1018 09:16:27.395288  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt.6dd2aec8: {Name:mk6e21b854f39a72826bd85be5ec5fc298b199fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:27.395475  307829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key.6dd2aec8 ...
	I1018 09:16:27.395491  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key.6dd2aec8: {Name:mk0894105faa3c087ffd9c9fdc31379b6526b690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:27.395577  307829 certs.go:382] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt.6dd2aec8 -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt
	I1018 09:16:27.395651  307829 certs.go:386] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key.6dd2aec8 -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key
	I1018 09:16:27.395705  307829 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.key
	I1018 09:16:27.395722  307829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.crt with IP's: []
	I1018 09:16:27.602598  307829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.crt ...
	I1018 09:16:27.602624  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.crt: {Name:mk8306903932dd1bb11b8ea9409214667367047c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:27.602816  307829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.key ...
	I1018 09:16:27.602835  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.key: {Name:mkd19044a8fb32eff2e080ea7a1555b5849cc3b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:27.603059  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:16:27.603102  307829 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:16:27.603119  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:16:27.603157  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:16:27.603187  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:16:27.603220  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:16:27.603272  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:16:27.603874  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:16:27.624224  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:16:27.643202  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:16:27.667310  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:16:27.687198  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 09:16:27.706826  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:16:27.726322  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:16:27.745966  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:16:27.766635  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:16:27.789853  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:16:27.812727  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:16:27.839899  307829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:16:27.854585  307829 ssh_runner.go:195] Run: openssl version
	I1018 09:16:27.862043  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:16:27.871459  307829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:27.875984  307829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:27.876057  307829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:27.912565  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:16:27.923183  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:16:27.932933  307829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:16:27.937886  307829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:16:27.937949  307829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:16:27.977776  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:16:27.988396  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:16:27.997972  307829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.002236  307829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.002294  307829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.040608  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
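
The openssl/ln pairs above give each CA bundle the subject-hash symlink that OpenSSL-based clients look up in /etc/ssl/certs. The same step sketched in Go, shelling out to the identical openssl invocation rather than reimplementing the subject hash:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	// Same command the log shows: print the subject hash of the cert.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	os.Remove(link) // emulate ln -f: replace any existing link
    	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link)
    }
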
	I1018 09:16:28.051449  307829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:16:28.055923  307829 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:16:28.055980  307829 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-986220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:16:28.056051  307829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:16:28.056119  307829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:16:28.089132  307829 cri.go:89] found id: ""
	I1018 09:16:28.089192  307829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:16:28.099177  307829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:16:28.109267  307829 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:16:28.109329  307829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:16:28.120642  307829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:16:28.120666  307829 kubeadm.go:157] found existing configuration files:
	
	I1018 09:16:28.120718  307829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1018 09:16:28.131668  307829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:16:28.131734  307829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:16:28.142231  307829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1018 09:16:28.155016  307829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:16:28.155078  307829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:16:28.166186  307829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1018 09:16:28.177468  307829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:16:28.177540  307829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:16:28.189027  307829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1018 09:16:28.199965  307829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:16:28.200051  307829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
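
The grep/rm sequence above keeps a kubeconfig only if it already pins the expected control-plane endpoint; anything missing or stale is deleted so kubeadm regenerates it. Condensed into a Go sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8444"
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(conf)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // already points at the right endpoint, keep it
    		}
    		os.Remove(conf) // missing or stale: force regeneration
    		fmt.Println("removed", conf)
    	}
    }
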
	I1018 09:16:28.209045  307829 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:16:28.261581  307829 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:16:28.261670  307829 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:16:28.299228  307829 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:16:28.299358  307829 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:16:28.299410  307829 kubeadm.go:318] OS: Linux
	I1018 09:16:28.299478  307829 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:16:28.299612  307829 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:16:28.299657  307829 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:16:28.299700  307829 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:16:28.299742  307829 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:16:28.299787  307829 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:16:28.299829  307829 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:16:28.299868  307829 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:16:28.395707  307829 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:16:28.395841  307829 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:16:28.395964  307829 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:16:28.413235  307829 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1018 09:16:24.295019  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:26.793721  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:27.431467  309439 cli_runner.go:164] Run: docker network inspect no-preload-031066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:16:27.452092  309439 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 09:16:27.456746  309439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:16:27.467844  309439 kubeadm.go:883] updating cluster {Name:no-preload-031066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:16:27.467966  309439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:16:27.468011  309439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:16:27.503028  309439 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:16:27.503054  309439 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:16:27.503062  309439 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 09:16:27.503150  309439 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-031066 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
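The rendered kubelet unit above is installed as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines below). To confirm what systemd actually merged, the unit and its drop-ins can be printed together; an illustrative check, not part of minikube's flow:

    # Prints the base kubelet.service plus every drop-in that overrides it.
    systemctl cat kubelet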
	I1018 09:16:27.503211  309439 ssh_runner.go:195] Run: crio config
	I1018 09:16:27.551972  309439 cni.go:84] Creating CNI manager for ""
	I1018 09:16:27.552003  309439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:16:27.552027  309439 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:16:27.552059  309439 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-031066 NodeName:no-preload-031066 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:16:27.552228  309439 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-031066"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
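
The kubeadm config printed above is what gets written to /var/tmp/minikube/kubeadm.yaml and fed to kubeadm init. One way to sanity-check such a file without touching the node is kubeadm's dry-run mode, which renders the manifests it would create; an illustrative command, not something the test runs:

    # Validates the config and prints would-be manifests without applying them.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run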
	
	I1018 09:16:27.552303  309439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:16:27.561492  309439 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:16:27.561566  309439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:16:27.570137  309439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 09:16:27.584540  309439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:16:27.598223  309439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1018 09:16:27.612505  309439 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:16:27.616378  309439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
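The /etc/hosts one-liners here (and the host.minikube.internal variant earlier) follow an idempotent pattern: filter out any stale line for the name, append the fresh mapping, and install the temp file via sudo cp, since the redirection itself runs unprivileged. A generalized sketch with NAME and IP as placeholders:

    # Replace-or-append a hosts entry without duplicating it.
    NAME=control-plane.minikube.internal
    IP=192.168.85.2
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts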
	I1018 09:16:27.628297  309439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:27.719096  309439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:16:27.742152  309439 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066 for IP: 192.168.85.2
	I1018 09:16:27.742182  309439 certs.go:195] generating shared ca certs ...
	I1018 09:16:27.742204  309439 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:27.742412  309439 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:16:27.742502  309439 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:16:27.742521  309439 certs.go:257] generating profile certs ...
	I1018 09:16:27.742635  309439 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/client.key
	I1018 09:16:27.742703  309439 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/apiserver.key.5b17cd89
	I1018 09:16:27.742770  309439 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/proxy-client.key
	I1018 09:16:27.742919  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:16:27.742965  309439 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:16:27.742982  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:16:27.743018  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:16:27.743053  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:16:27.743084  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:16:27.743146  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:16:27.744065  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:16:27.766446  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:16:27.789662  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:16:27.810502  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:16:27.837044  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:16:27.858195  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:16:27.878029  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:16:27.898104  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:16:27.918370  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:16:27.938784  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:16:27.959384  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:16:27.979401  309439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:16:27.994769  309439 ssh_runner.go:195] Run: openssl version
	I1018 09:16:28.001920  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:16:28.011616  309439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.015846  309439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.015902  309439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.055574  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:16:28.064720  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:16:28.074630  309439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:28.079603  309439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:28.079670  309439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:28.127275  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:16:28.140151  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:16:28.152584  309439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:16:28.158002  309439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:16:28.158067  309439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:16:28.211197  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:16:28.220083  309439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:16:28.224791  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:16:28.278506  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:16:28.328610  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:16:28.392663  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:16:28.455567  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:16:28.519223  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 09:16:28.580698  309439 kubeadm.go:400] StartCluster: {Name:no-preload-031066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:16:28.580833  309439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:16:28.580901  309439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:16:28.624088  309439 cri.go:89] found id: "153dd41ff60f495d247d4bd42054dd9255c2fe5ccbc173f31021152a50b30308"
	I1018 09:16:28.624113  309439 cri.go:89] found id: "b51a1224ef6b876bc35ce20f2366f94525e300e4432dff8348abbde915ade5af"
	I1018 09:16:28.624118  309439 cri.go:89] found id: "62682de07bbfeb0d0f0c6405121566236410d571314651f369c15f65938b548a"
	I1018 09:16:28.624125  309439 cri.go:89] found id: "db536597b2746191742cfa1b8df28f2fe3935b9a553d5543f993db2773c9f6a1"
	I1018 09:16:28.624129  309439 cri.go:89] found id: ""
	I1018 09:16:28.624177  309439 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:16:28.642550  309439 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:16:28Z" level=error msg="open /run/runc: no such file or directory"
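The runc failure above is non-fatal: /run/runc does not exist on this host, so listing paused containers via bare runc cannot work, and minikube logs the unpause as failed and continues. The CRI-level query it ran just before is the reliable route; the same thing by hand, with the socket path taken from the config earlier in this log:

    # Enumerate kube-system container IDs through cri-o's CRI socket.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
      ps -a --quiet --label io.kubernetes.pod.namespace=kube-system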
	I1018 09:16:28.642622  309439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:16:28.658418  309439 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:16:28.658440  309439 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:16:28.658714  309439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:16:28.670518  309439 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:16:28.671730  309439 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-031066" does not appear in /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:16:28.672554  309439 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-5897/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-031066" cluster setting kubeconfig missing "no-preload-031066" context setting]
	I1018 09:16:28.673952  309439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:28.676681  309439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:16:28.689778  309439 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 09:16:28.689902  309439 kubeadm.go:601] duration metric: took 31.455758ms to restartPrimaryControlPlane
	I1018 09:16:28.689919  309439 kubeadm.go:402] duration metric: took 109.246641ms to StartCluster
	I1018 09:16:28.689940  309439 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:28.690009  309439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:16:28.692230  309439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:28.692547  309439 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:16:28.692792  309439 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:28.692794  309439 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:16:28.692955  309439 addons.go:69] Setting storage-provisioner=true in profile "no-preload-031066"
	I1018 09:16:28.692978  309439 addons.go:238] Setting addon storage-provisioner=true in "no-preload-031066"
	I1018 09:16:28.692975  309439 addons.go:69] Setting dashboard=true in profile "no-preload-031066"
	I1018 09:16:28.692996  309439 addons.go:69] Setting default-storageclass=true in profile "no-preload-031066"
	I1018 09:16:28.693007  309439 addons.go:238] Setting addon dashboard=true in "no-preload-031066"
	I1018 09:16:28.693015  309439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-031066"
	W1018 09:16:28.693018  309439 addons.go:247] addon dashboard should already be in state true
	I1018 09:16:28.693055  309439 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:16:28.693384  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	W1018 09:16:28.692987  309439 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:16:28.693557  309439 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:16:28.693612  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:28.694012  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:28.694776  309439 out.go:179] * Verifying Kubernetes components...
	I1018 09:16:28.696220  309439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:28.724766  309439 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:16:28.726175  309439 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:16:28.727367  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:16:28.727390  309439 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:16:28.727455  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:28.728591  309439 addons.go:238] Setting addon default-storageclass=true in "no-preload-031066"
	W1018 09:16:28.728613  309439 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:16:28.728642  309439 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:16:28.729157  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:28.730548  309439 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:16:28.416330  307829 out.go:252]   - Generating certificates and keys ...
	I1018 09:16:28.416469  307829 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:16:28.416585  307829 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:16:28.961544  307829 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:16:29.130817  307829 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:16:28.733826  309439 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:16:28.733946  309439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:16:28.734145  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:28.758429  309439 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:16:28.758462  309439 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:16:28.758530  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:28.765380  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:28.782447  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:28.799019  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:28.912215  309439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:16:28.934701  309439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:16:28.935357  309439 node_ready.go:35] waiting up to 6m0s for node "no-preload-031066" to be "Ready" ...
	I1018 09:16:28.970747  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:16:28.970915  309439 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:16:28.972898  309439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:16:29.005063  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:16:29.005087  309439 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:16:29.060862  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:16:29.060897  309439 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:16:29.081097  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:16:29.081122  309439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:16:29.100999  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:16:29.101045  309439 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:16:29.120688  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:16:29.120720  309439 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:16:29.139590  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:16:29.139620  309439 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:16:29.157828  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:16:29.157857  309439 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:16:29.177540  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:16:29.177566  309439 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:16:29.198120  309439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:16:30.672748  309439 node_ready.go:49] node "no-preload-031066" is "Ready"
	I1018 09:16:30.672787  309439 node_ready.go:38] duration metric: took 1.737388567s for node "no-preload-031066" to be "Ready" ...
	I1018 09:16:30.672804  309439 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:16:30.672858  309439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:16:31.507012  309439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.572254383s)
	I1018 09:16:31.507099  309439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.534180069s)
	I1018 09:16:31.507726  309439 api_server.go:72] duration metric: took 2.815152526s to wait for apiserver process to appear ...
	I1018 09:16:31.507747  309439 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:16:31.507767  309439 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:16:31.508258  309439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.309493299s)
	I1018 09:16:31.510608  309439 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-031066 addons enable metrics-server
	
	I1018 09:16:31.515473  309439 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:16:31.515509  309439 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
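A verbose healthz body like the two above enumerates every post-start hook; the 500 is expected mid-bootstrap while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes hooks finish, and the next poll (logged a few lines below) returns 200. The same endpoint can be queried by hand; illustrative commands, not part of the test:

    # Through the apiserver proxy machinery with credentials:
    kubectl get --raw '/healthz?verbose'
    # Or straight at the apiserver; anonymous access to /healthz is
    # granted by the default system:public-info-viewer ClusterRole.
    curl -k 'https://192.168.85.2:8443/healthz?verbose'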
	I1018 09:16:31.521256  309439 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1018 09:16:27.113789  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:29.611877  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:28.804271  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:31.298052  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:29.615880  307829 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:16:29.662294  307829 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:16:30.234104  307829 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:16:30.234392  307829 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-986220 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:16:30.435950  307829 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:16:30.436322  307829 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-986220 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:16:30.721773  307829 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:16:31.077742  307829 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:16:31.728841  307829 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:16:31.729054  307829 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:16:32.282669  307829 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:16:32.757782  307829 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:16:33.241823  307829 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:16:33.509889  307829 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:16:33.955012  307829 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:16:33.955761  307829 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:16:33.959972  307829 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:16:33.961487  307829 out.go:252]   - Booting up control plane ...
	I1018 09:16:33.961586  307829 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:16:33.961682  307829 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:16:33.962289  307829 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:16:33.978521  307829 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:16:33.979073  307829 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:16:33.987745  307829 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:16:33.988059  307829 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:16:33.988143  307829 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:16:34.106714  307829 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:16:34.106869  307829 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
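The kubelet health endpoint kubeadm polls here is plain HTTP on localhost; checking it by hand is a one-liner (illustrative, run on the node itself):

    curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy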
	I1018 09:16:31.522773  309439 addons.go:514] duration metric: took 2.829985828s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:16:32.008497  309439 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:16:32.013652  309439 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 09:16:32.014899  309439 api_server.go:141] control plane version: v1.34.1
	I1018 09:16:32.014936  309439 api_server.go:131] duration metric: took 507.174967ms to wait for apiserver health ...
	I1018 09:16:32.014946  309439 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:16:32.018941  309439 system_pods.go:59] 8 kube-system pods found
	I1018 09:16:32.018978  309439 system_pods.go:61] "coredns-66bc5c9577-h44wj" [0f9ac8bf-4d8f-489f-a5bb-f8ef2d832a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:32.018993  309439 system_pods.go:61] "etcd-no-preload-031066" [46ee9eac-4087-442e-855b-50a8b65b06df] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:16:32.019001  309439 system_pods.go:61] "kindnet-k7m9t" [08c34b72-06a7-4a73-b703-ce61dbf3a37f] Running
	I1018 09:16:32.019011  309439 system_pods.go:61] "kube-apiserver-no-preload-031066" [7b20717e-d3b8-4f72-9c92-04c74b236964] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:16:32.019019  309439 system_pods.go:61] "kube-controller-manager-no-preload-031066" [e8145322-b25f-40ec-aa8f-39b64900226c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:16:32.019025  309439 system_pods.go:61] "kube-proxy-jr5qn" [1ae92f3f-9c07-4fb0-8334-549bfd4cac76] Running
	I1018 09:16:32.019033  309439 system_pods.go:61] "kube-scheduler-no-preload-031066" [2d6fcc42-b0a0-46d3-8eb1-7408eadd4dc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:16:32.019047  309439 system_pods.go:61] "storage-provisioner" [5b3e8950-c8a2-4205-b3aa-5c48157fc9d1] Running
	I1018 09:16:32.019057  309439 system_pods.go:74] duration metric: took 4.103211ms to wait for pod list to return data ...
	I1018 09:16:32.019071  309439 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:16:32.021957  309439 default_sa.go:45] found service account: "default"
	I1018 09:16:32.021981  309439 default_sa.go:55] duration metric: took 2.904005ms for default service account to be created ...
	I1018 09:16:32.021993  309439 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:16:32.025565  309439 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:32.025597  309439 system_pods.go:89] "coredns-66bc5c9577-h44wj" [0f9ac8bf-4d8f-489f-a5bb-f8ef2d832a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:32.025607  309439 system_pods.go:89] "etcd-no-preload-031066" [46ee9eac-4087-442e-855b-50a8b65b06df] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:16:32.025615  309439 system_pods.go:89] "kindnet-k7m9t" [08c34b72-06a7-4a73-b703-ce61dbf3a37f] Running
	I1018 09:16:32.025624  309439 system_pods.go:89] "kube-apiserver-no-preload-031066" [7b20717e-d3b8-4f72-9c92-04c74b236964] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:16:32.025633  309439 system_pods.go:89] "kube-controller-manager-no-preload-031066" [e8145322-b25f-40ec-aa8f-39b64900226c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:16:32.025650  309439 system_pods.go:89] "kube-proxy-jr5qn" [1ae92f3f-9c07-4fb0-8334-549bfd4cac76] Running
	I1018 09:16:32.025660  309439 system_pods.go:89] "kube-scheduler-no-preload-031066" [2d6fcc42-b0a0-46d3-8eb1-7408eadd4dc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:16:32.025674  309439 system_pods.go:89] "storage-provisioner" [5b3e8950-c8a2-4205-b3aa-5c48157fc9d1] Running
	I1018 09:16:32.025689  309439 system_pods.go:126] duration metric: took 3.68704ms to wait for k8s-apps to be running ...
	I1018 09:16:32.025702  309439 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:16:32.025758  309439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:16:32.043072  309439 system_svc.go:56] duration metric: took 17.35931ms WaitForService to wait for kubelet
	I1018 09:16:32.043109  309439 kubeadm.go:586] duration metric: took 3.350536737s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:16:32.043130  309439 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:16:32.047292  309439 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:16:32.047324  309439 node_conditions.go:123] node cpu capacity is 8
	I1018 09:16:32.047338  309439 node_conditions.go:105] duration metric: took 4.202003ms to run NodePressure ...
	I1018 09:16:32.047376  309439 start.go:241] waiting for startup goroutines ...
	I1018 09:16:32.047386  309439 start.go:246] waiting for cluster config update ...
	I1018 09:16:32.047405  309439 start.go:255] writing updated cluster config ...
	I1018 09:16:32.047889  309439 ssh_runner.go:195] Run: rm -f paused
	I1018 09:16:32.053052  309439 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:16:32.058191  309439 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h44wj" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:16:34.063715  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:36.065474  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:32.111388  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:34.111683  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:36.112976  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:33.795029  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:36.295172  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:38.295449  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
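The pod_ready.go:104 warnings interleaved through this section are minikube's own readiness poll across the three clusters under test; the equivalent check with stock kubectl (pod name taken from the log) would be:

    # Blocks until the pod reports Ready or the timeout elapses.
    kubectl -n kube-system wait --for=condition=Ready \
      pod/coredns-5dd5756b68-gwttp --timeout=4m0s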
	I1018 09:16:35.108493  307829 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002011819s
	I1018 09:16:35.113205  307829 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:16:35.113369  307829 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1018 09:16:35.113508  307829 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:16:35.113618  307829 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:16:37.315142  307829 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.201785879s
	I1018 09:16:38.892629  307829 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.779357908s
	W1018 09:16:38.565895  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:41.065990  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	I1018 09:16:41.117083  307829 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.003684215s
	I1018 09:16:41.169314  307829 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:16:41.210944  307829 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:16:41.252804  307829 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:16:41.253079  307829 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-986220 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:16:41.306958  307829 kubeadm.go:318] [bootstrap-token] Using token: f3p04i.qkc1arqowwwf8733
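Bootstrap tokens like the one above are short-lived by design (ttl: 24h0m0s in the InitConfiguration earlier); after init they can be inspected or revoked from the control-plane node, e.g.:

    sudo kubeadm token list
    # Revoke it early if it should not outlive the bootstrap:
    sudo kubeadm token delete f3p04i.qkc1arqowwwf8733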
	W1018 09:16:38.611800  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:40.612030  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	I1018 09:16:41.611642  295389 node_ready.go:49] node "embed-certs-880603" is "Ready"
	I1018 09:16:41.611675  295389 node_ready.go:38] duration metric: took 41.503667581s for node "embed-certs-880603" to be "Ready" ...
	I1018 09:16:41.611697  295389 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:16:41.611765  295389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:16:41.627995  295389 api_server.go:72] duration metric: took 41.84915441s to wait for apiserver process to appear ...
	I1018 09:16:41.628025  295389 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:16:41.628048  295389 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:16:41.633763  295389 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:16:41.635685  295389 api_server.go:141] control plane version: v1.34.1
	I1018 09:16:41.635717  295389 api_server.go:131] duration metric: took 7.685454ms to wait for apiserver health ...
	I1018 09:16:41.635728  295389 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:16:41.640645  295389 system_pods.go:59] 8 kube-system pods found
	I1018 09:16:41.640688  295389 system_pods.go:61] "coredns-66bc5c9577-7fnw7" [04bb2d33-29f9-45e9-a6b1-e2b770651c0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:41.640697  295389 system_pods.go:61] "etcd-embed-certs-880603" [da7643b6-9066-4e2f-99eb-c2e6d085f539] Running
	I1018 09:16:41.640707  295389 system_pods.go:61] "kindnet-wzdm5" [20629c75-ca93-46db-875e-49d67c7b3f06] Running
	I1018 09:16:41.640713  295389 system_pods.go:61] "kube-apiserver-embed-certs-880603" [1e4ec0ef-dc43-4939-a733-02690b04d19b] Running
	I1018 09:16:41.640717  295389 system_pods.go:61] "kube-controller-manager-embed-certs-880603" [ccc2c9f0-0b2c-46e5-bea8-c82e8e3124ec] Running
	I1018 09:16:41.640720  295389 system_pods.go:61] "kube-proxy-k4kcs" [83d1821f-468a-4bf0-8fc0-e40e0668f6ff] Running
	I1018 09:16:41.640724  295389 system_pods.go:61] "kube-scheduler-embed-certs-880603" [5635b7e1-dca9-4a7e-8b9c-aa96067fd707] Running
	I1018 09:16:41.640728  295389 system_pods.go:61] "storage-provisioner" [d2aa7a09-3332-4744-9180-d307b4fc8194] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:16:41.640734  295389 system_pods.go:74] duration metric: took 5.000069ms to wait for pod list to return data ...
	I1018 09:16:41.640743  295389 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:16:41.644176  295389 default_sa.go:45] found service account: "default"
	I1018 09:16:41.644203  295389 default_sa.go:55] duration metric: took 3.451989ms for default service account to be created ...
	I1018 09:16:41.644216  295389 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:16:41.648178  295389 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:41.648208  295389 system_pods.go:89] "coredns-66bc5c9577-7fnw7" [04bb2d33-29f9-45e9-a6b1-e2b770651c0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:41.648214  295389 system_pods.go:89] "etcd-embed-certs-880603" [da7643b6-9066-4e2f-99eb-c2e6d085f539] Running
	I1018 09:16:41.648220  295389 system_pods.go:89] "kindnet-wzdm5" [20629c75-ca93-46db-875e-49d67c7b3f06] Running
	I1018 09:16:41.648223  295389 system_pods.go:89] "kube-apiserver-embed-certs-880603" [1e4ec0ef-dc43-4939-a733-02690b04d19b] Running
	I1018 09:16:41.648228  295389 system_pods.go:89] "kube-controller-manager-embed-certs-880603" [ccc2c9f0-0b2c-46e5-bea8-c82e8e3124ec] Running
	I1018 09:16:41.648231  295389 system_pods.go:89] "kube-proxy-k4kcs" [83d1821f-468a-4bf0-8fc0-e40e0668f6ff] Running
	I1018 09:16:41.648235  295389 system_pods.go:89] "kube-scheduler-embed-certs-880603" [5635b7e1-dca9-4a7e-8b9c-aa96067fd707] Running
	I1018 09:16:41.648239  295389 system_pods.go:89] "storage-provisioner" [d2aa7a09-3332-4744-9180-d307b4fc8194] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:16:41.648267  295389 retry.go:31] will retry after 192.969575ms: missing components: kube-dns
	I1018 09:16:41.847560  295389 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:41.847602  295389 system_pods.go:89] "coredns-66bc5c9577-7fnw7" [04bb2d33-29f9-45e9-a6b1-e2b770651c0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:41.847612  295389 system_pods.go:89] "etcd-embed-certs-880603" [da7643b6-9066-4e2f-99eb-c2e6d085f539] Running
	I1018 09:16:41.847620  295389 system_pods.go:89] "kindnet-wzdm5" [20629c75-ca93-46db-875e-49d67c7b3f06] Running
	I1018 09:16:41.847626  295389 system_pods.go:89] "kube-apiserver-embed-certs-880603" [1e4ec0ef-dc43-4939-a733-02690b04d19b] Running
	I1018 09:16:41.847633  295389 system_pods.go:89] "kube-controller-manager-embed-certs-880603" [ccc2c9f0-0b2c-46e5-bea8-c82e8e3124ec] Running
	I1018 09:16:41.847637  295389 system_pods.go:89] "kube-proxy-k4kcs" [83d1821f-468a-4bf0-8fc0-e40e0668f6ff] Running
	I1018 09:16:41.847642  295389 system_pods.go:89] "kube-scheduler-embed-certs-880603" [5635b7e1-dca9-4a7e-8b9c-aa96067fd707] Running
	I1018 09:16:41.847646  295389 system_pods.go:89] "storage-provisioner" [d2aa7a09-3332-4744-9180-d307b4fc8194] Running
	I1018 09:16:41.847658  295389 system_pods.go:126] duration metric: took 203.434861ms to wait for k8s-apps to be running ...
	I1018 09:16:41.847708  295389 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:16:41.847760  295389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:16:41.864768  295389 system_svc.go:56] duration metric: took 17.051428ms WaitForService to wait for kubelet
	I1018 09:16:41.864801  295389 kubeadm.go:586] duration metric: took 42.085966942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:16:41.864822  295389 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:16:41.868754  295389 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:16:41.868786  295389 node_conditions.go:123] node cpu capacity is 8
	I1018 09:16:41.868806  295389 node_conditions.go:105] duration metric: took 3.977808ms to run NodePressure ...
	I1018 09:16:41.868820  295389 start.go:241] waiting for startup goroutines ...
	I1018 09:16:41.868838  295389 start.go:246] waiting for cluster config update ...
	I1018 09:16:41.868852  295389 start.go:255] writing updated cluster config ...
	I1018 09:16:41.869184  295389 ssh_runner.go:195] Run: rm -f paused
	I1018 09:16:41.873479  295389 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:16:41.877518  295389 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7fnw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.882231  295389 pod_ready.go:94] pod "coredns-66bc5c9577-7fnw7" is "Ready"
	I1018 09:16:41.882258  295389 pod_ready.go:86] duration metric: took 4.717941ms for pod "coredns-66bc5c9577-7fnw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.884325  295389 pod_ready.go:83] waiting for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.888331  295389 pod_ready.go:94] pod "etcd-embed-certs-880603" is "Ready"
	I1018 09:16:41.888374  295389 pod_ready.go:86] duration metric: took 3.985545ms for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.890515  295389 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.894262  295389 pod_ready.go:94] pod "kube-apiserver-embed-certs-880603" is "Ready"
	I1018 09:16:41.894287  295389 pod_ready.go:86] duration metric: took 3.751424ms for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.896263  295389 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.323567  307829 out.go:252]   - Configuring RBAC rules ...
	I1018 09:16:41.323741  307829 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:16:41.323891  307829 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:16:41.401828  307829 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:16:41.461336  307829 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:16:41.465137  307829 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:16:41.469707  307829 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:16:41.525899  307829 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:16:41.942881  307829 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:16:42.524571  307829 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:16:42.525447  307829 kubeadm.go:318] 
	I1018 09:16:42.525556  307829 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:16:42.525568  307829 kubeadm.go:318] 
	I1018 09:16:42.525684  307829 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:16:42.525705  307829 kubeadm.go:318] 
	I1018 09:16:42.525741  307829 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:16:42.525845  307829 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:16:42.525926  307829 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:16:42.525935  307829 kubeadm.go:318] 
	I1018 09:16:42.526007  307829 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:16:42.526017  307829 kubeadm.go:318] 
	I1018 09:16:42.526086  307829 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:16:42.526095  307829 kubeadm.go:318] 
	I1018 09:16:42.526162  307829 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:16:42.526271  307829 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:16:42.526404  307829 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:16:42.526415  307829 kubeadm.go:318] 
	I1018 09:16:42.526533  307829 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:16:42.526640  307829 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:16:42.526665  307829 kubeadm.go:318] 
	I1018 09:16:42.526797  307829 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token f3p04i.qkc1arqowwwf8733 \
	I1018 09:16:42.526958  307829 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f \
	I1018 09:16:42.526992  307829 kubeadm.go:318] 	--control-plane 
	I1018 09:16:42.526998  307829 kubeadm.go:318] 
	I1018 09:16:42.527113  307829 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:16:42.527135  307829 kubeadm.go:318] 
	I1018 09:16:42.527260  307829 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token f3p04i.qkc1arqowwwf8733 \
	I1018 09:16:42.527431  307829 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f 
	I1018 09:16:42.530266  307829 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:16:42.530442  307829 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
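
The sha256 value in the kubeadm join commands above is kubeadm's hash of the cluster CA's public key. As a cross-check it can be recomputed on the control-plane node with the openssl pipeline from the kubeadm documentation; a minimal sketch, assuming the default CA location:

	# Recompute the --discovery-token-ca-cert-hash value from the cluster CA
	# (default kubeadm pki path; adjust if the cluster uses a custom cert dir).
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'
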
	I1018 09:16:42.530478  307829 cni.go:84] Creating CNI manager for ""
	I1018 09:16:42.530499  307829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:16:42.533104  307829 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1018 09:16:40.796098  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:43.293707  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:42.277552  295389 pod_ready.go:94] pod "kube-controller-manager-embed-certs-880603" is "Ready"
	I1018 09:16:42.277584  295389 pod_ready.go:86] duration metric: took 381.302407ms for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:42.477778  295389 pod_ready.go:83] waiting for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:42.878053  295389 pod_ready.go:94] pod "kube-proxy-k4kcs" is "Ready"
	I1018 09:16:42.878082  295389 pod_ready.go:86] duration metric: took 400.281372ms for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:43.078230  295389 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:43.478123  295389 pod_ready.go:94] pod "kube-scheduler-embed-certs-880603" is "Ready"
	I1018 09:16:43.478149  295389 pod_ready.go:86] duration metric: took 399.897961ms for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:43.478161  295389 pod_ready.go:40] duration metric: took 1.604642015s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:16:43.525821  295389 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:16:43.527615  295389 out.go:179] * Done! kubectl is now configured to use "embed-certs-880603" cluster and "default" namespace by default
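
The pod_ready waits logged above can be approximated by hand once the kubeconfig is written (minikube's wait also accepts pod deletion, which kubectl wait does not); a minimal sketch using the same label selectors and the 4m budget from pod_ready.go:37, assuming the embed-certs-880603 context is active:

	# Wait for each control-plane component to report Ready, mirroring
	# minikube's extra-wait label list.
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m
	done
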
	I1018 09:16:42.534703  307829 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:16:42.539385  307829 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:16:42.539408  307829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:16:42.553684  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:16:42.780522  307829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:16:42.780591  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:42.780624  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-986220 minikube.k8s.io/updated_at=2025_10_18T09_16_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=default-k8s-diff-port-986220 minikube.k8s.io/primary=true
	I1018 09:16:42.792392  307829 ops.go:34] apiserver oom_adj: -16
	I1018 09:16:42.879101  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:43.380201  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:43.879591  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:44.380139  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1018 09:16:43.565238  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:46.064027  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	I1018 09:16:44.879531  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:45.379171  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:45.880033  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:46.380235  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:46.879320  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:46.954716  307829 kubeadm.go:1113] duration metric: took 4.174183533s to wait for elevateKubeSystemPrivileges
	I1018 09:16:46.954763  307829 kubeadm.go:402] duration metric: took 18.898789866s to StartCluster
	I1018 09:16:46.954787  307829 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:46.954887  307829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:16:46.956811  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:46.957059  307829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:16:46.957068  307829 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:16:46.957164  307829 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:16:46.957257  307829 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-986220"
	I1018 09:16:46.957273  307829 config.go:182] Loaded profile config "default-k8s-diff-port-986220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:46.957277  307829 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-986220"
	I1018 09:16:46.957273  307829 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-986220"
	I1018 09:16:46.957302  307829 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-986220"
	I1018 09:16:46.957324  307829 host.go:66] Checking if "default-k8s-diff-port-986220" exists ...
	I1018 09:16:46.957748  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:46.957965  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:46.959069  307829 out.go:179] * Verifying Kubernetes components...
	I1018 09:16:46.960365  307829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:46.985389  307829 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:16:46.986830  307829 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:16:46.986853  307829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:16:46.986931  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:46.987456  307829 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-986220"
	I1018 09:16:46.987508  307829 host.go:66] Checking if "default-k8s-diff-port-986220" exists ...
	I1018 09:16:46.988044  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:47.018839  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:47.025009  307829 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:16:47.025036  307829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:16:47.025097  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:47.048123  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:47.060152  307829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:16:47.123586  307829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:16:47.141939  307829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:16:47.165247  307829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:16:47.266448  307829 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1018 09:16:47.268332  307829 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-986220" to be "Ready" ...
	I1018 09:16:47.477534  307829 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1018 09:16:45.293985  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:46.294258  302609 pod_ready.go:94] pod "coredns-5dd5756b68-gwttp" is "Ready"
	I1018 09:16:46.294287  302609 pod_ready.go:86] duration metric: took 31.006532603s for pod "coredns-5dd5756b68-gwttp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.297373  302609 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.302066  302609 pod_ready.go:94] pod "etcd-old-k8s-version-951975" is "Ready"
	I1018 09:16:46.302090  302609 pod_ready.go:86] duration metric: took 4.692329ms for pod "etcd-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.305138  302609 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.309671  302609 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-951975" is "Ready"
	I1018 09:16:46.309694  302609 pod_ready.go:86] duration metric: took 4.527103ms for pod "kube-apiserver-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.312739  302609 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.492306  302609 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-951975" is "Ready"
	I1018 09:16:46.492330  302609 pod_ready.go:86] duration metric: took 179.571371ms for pod "kube-controller-manager-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.692512  302609 pod_ready.go:83] waiting for pod "kube-proxy-rrzqp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:47.093588  302609 pod_ready.go:94] pod "kube-proxy-rrzqp" is "Ready"
	I1018 09:16:47.093616  302609 pod_ready.go:86] duration metric: took 401.079405ms for pod "kube-proxy-rrzqp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:47.294894  302609 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:47.692538  302609 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-951975" is "Ready"
	I1018 09:16:47.692575  302609 pod_ready.go:86] duration metric: took 397.645548ms for pod "kube-scheduler-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:47.692591  302609 pod_ready.go:40] duration metric: took 32.409896584s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:16:47.741097  302609 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1018 09:16:47.743375  302609 out.go:203] 
	W1018 09:16:47.744703  302609 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 09:16:47.745945  302609 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 09:16:47.747222  302609 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-951975" cluster and "default" namespace by default
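
The skew warning above (client 1.34.1 vs cluster 1.28.0, minor skew 6) can be confirmed directly; a sketch, assuming jq is installed:

	# Print client and server versions side by side to verify the skew.
	kubectl version -o json | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'
	# Or run the version-matched client that minikube bundles, as the hint suggests:
	minikube -p old-k8s-version-951975 kubectl -- version
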
	I1018 09:16:47.478905  307829 addons.go:514] duration metric: took 521.738605ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:16:47.773018  307829 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-986220" context rescaled to 1 replicas
	W1018 09:16:49.271321  307829 node_ready.go:57] node "default-k8s-diff-port-986220" has "Ready":"False" status (will retry)
	W1018 09:16:48.064660  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:50.564043  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 18 09:16:41 embed-certs-880603 crio[786]: time="2025-10-18T09:16:41.700581941Z" level=info msg="Starting container: ee079582500d1479ab1bb66b12962f16e57721e32e308a8eb97b75d6e14624cf" id=43c45ea1-2aeb-432e-836a-9f27ec1a5e71 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:16:41 embed-certs-880603 crio[786]: time="2025-10-18T09:16:41.703657609Z" level=info msg="Started container" PID=1880 containerID=ee079582500d1479ab1bb66b12962f16e57721e32e308a8eb97b75d6e14624cf description=kube-system/coredns-66bc5c9577-7fnw7/coredns id=43c45ea1-2aeb-432e-836a-9f27ec1a5e71 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90c57a544bf6f64e0778eaebe82a885f9d4fd7d75fd0dff4a8332cbcc75687aa
	Oct 18 09:16:43 embed-certs-880603 crio[786]: time="2025-10-18T09:16:43.976171803Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9a4cd2f7-fdfb-4d97-b423-f48fa6ea7ddb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:16:43 embed-certs-880603 crio[786]: time="2025-10-18T09:16:43.976285537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:43 embed-certs-880603 crio[786]: time="2025-10-18T09:16:43.981619958Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ba7b11140322596dc62467ef1cca2cba5cf10e31ed0bf0928c4b5e337db62124 UID:ef4e2065-8b84-4980-be88-6bfeded4c762 NetNS:/var/run/netns/268a6371-2bac-4ecb-8677-d02c54892a80 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128648}] Aliases:map[]}"
	Oct 18 09:16:43 embed-certs-880603 crio[786]: time="2025-10-18T09:16:43.981659233Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:16:43 embed-certs-880603 crio[786]: time="2025-10-18T09:16:43.992173273Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ba7b11140322596dc62467ef1cca2cba5cf10e31ed0bf0928c4b5e337db62124 UID:ef4e2065-8b84-4980-be88-6bfeded4c762 NetNS:/var/run/netns/268a6371-2bac-4ecb-8677-d02c54892a80 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128648}] Aliases:map[]}"
	Oct 18 09:16:43 embed-certs-880603 crio[786]: time="2025-10-18T09:16:43.992303796Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 09:16:43 embed-certs-880603 crio[786]: time="2025-10-18T09:16:43.99315414Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:16:43 embed-certs-880603 crio[786]: time="2025-10-18T09:16:43.994053516Z" level=info msg="Ran pod sandbox ba7b11140322596dc62467ef1cca2cba5cf10e31ed0bf0928c4b5e337db62124 with infra container: default/busybox/POD" id=9a4cd2f7-fdfb-4d97-b423-f48fa6ea7ddb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:16:43 embed-certs-880603 crio[786]: time="2025-10-18T09:16:43.995243602Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a16b26c9-d918-4ad5-9eb0-e4328b33256e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:16:43 embed-certs-880603 crio[786]: time="2025-10-18T09:16:43.995411098Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a16b26c9-d918-4ad5-9eb0-e4328b33256e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:16:43 embed-certs-880603 crio[786]: time="2025-10-18T09:16:43.995448718Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a16b26c9-d918-4ad5-9eb0-e4328b33256e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:16:43 embed-certs-880603 crio[786]: time="2025-10-18T09:16:43.996235072Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f9c18521-cc43-4a6b-b7ab-3226697e3556 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:16:44 embed-certs-880603 crio[786]: time="2025-10-18T09:16:44.000078195Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 09:16:44 embed-certs-880603 crio[786]: time="2025-10-18T09:16:44.744679065Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f9c18521-cc43-4a6b-b7ab-3226697e3556 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:16:44 embed-certs-880603 crio[786]: time="2025-10-18T09:16:44.745530547Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=754ac83f-6548-4c70-ad09-6d3b3877d67a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:16:44 embed-certs-880603 crio[786]: time="2025-10-18T09:16:44.746947043Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=428ae104-e2f8-45a4-af3a-81156f0e424b name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:16:44 embed-certs-880603 crio[786]: time="2025-10-18T09:16:44.750199829Z" level=info msg="Creating container: default/busybox/busybox" id=b3a9d024-422b-42cd-9809-8fa64582fea9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:16:44 embed-certs-880603 crio[786]: time="2025-10-18T09:16:44.750942736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:44 embed-certs-880603 crio[786]: time="2025-10-18T09:16:44.754744602Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:44 embed-certs-880603 crio[786]: time="2025-10-18T09:16:44.755257891Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:44 embed-certs-880603 crio[786]: time="2025-10-18T09:16:44.782148021Z" level=info msg="Created container ac57eb627838033bf39e49173f18d7e270203d03f8717b4c00d4131add793862: default/busybox/busybox" id=b3a9d024-422b-42cd-9809-8fa64582fea9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:16:44 embed-certs-880603 crio[786]: time="2025-10-18T09:16:44.782876091Z" level=info msg="Starting container: ac57eb627838033bf39e49173f18d7e270203d03f8717b4c00d4131add793862" id=7a78c94f-3010-4557-81de-7a083a3820c2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:16:44 embed-certs-880603 crio[786]: time="2025-10-18T09:16:44.784757048Z" level=info msg="Started container" PID=1952 containerID=ac57eb627838033bf39e49173f18d7e270203d03f8717b4c00d4131add793862 description=default/busybox/busybox id=7a78c94f-3010-4557-81de-7a083a3820c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ba7b11140322596dc62467ef1cca2cba5cf10e31ed0bf0928c4b5e337db62124
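
The "container status" table below is CRI container listing output; the same view can be pulled from the node at any time. A sketch, assuming the profile name from this run:

	# List all CRI-O containers on the minikube node (running and exited).
	minikube -p embed-certs-880603 ssh "sudo crictl ps -a"
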
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	ac57eb6278380       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago        Running             busybox                   0                   ba7b111403225       busybox                                      default
	ee079582500d1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago       Running             coredns                   0                   90c57a544bf6f       coredns-66bc5c9577-7fnw7                     kube-system
	c113fa2505a02       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago       Running             storage-provisioner       0                   c4e14ee1c5ce5       storage-provisioner                          kube-system
	509de041d14e3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      52 seconds ago       Running             kindnet-cni               0                   dc2b6e8fc68be       kindnet-wzdm5                                kube-system
	503aa9a806bb0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      52 seconds ago       Running             kube-proxy                0                   aeab2cfc0eb98       kube-proxy-k4kcs                             kube-system
	f851ac003d104       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   705b24ab2c420       kube-apiserver-embed-certs-880603            kube-system
	ceefb0fe6b6e2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   28735dd2316e2       kube-scheduler-embed-certs-880603            kube-system
	f9eb6d4962199       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   f584ef4e15aef       etcd-embed-certs-880603                      kube-system
	ae6aea4f77598       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   c4f7fce0391d2       kube-controller-manager-embed-certs-880603   kube-system
	
	
	==> coredns [ee079582500d1479ab1bb66b12962f16e57721e32e308a8eb97b75d6e14624cf] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40199 - 38994 "HINFO IN 3685182497891594723.1053756305055645979. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.140329987s
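
The configuration SHA512 above covers the Corefile as rewritten earlier in the log (the sed pipeline that injects the host.minikube.internal hosts block). To inspect the live Corefile, a minimal sketch against the same cluster:

	# Dump the Corefile CoreDNS is actually loading.
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
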
	
	
	==> describe nodes <==
	Name:               embed-certs-880603
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-880603
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=embed-certs-880603
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_15_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:15:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-880603
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:16:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:16:45 +0000   Sat, 18 Oct 2025 09:15:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:16:45 +0000   Sat, 18 Oct 2025 09:15:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:16:45 +0000   Sat, 18 Oct 2025 09:15:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:16:45 +0000   Sat, 18 Oct 2025 09:16:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-880603
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                8a50a24b-e651-4f1d-8d2e-12e3c28f7fe8
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-7fnw7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     53s
	  kube-system                 etcd-embed-certs-880603                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         59s
	  kube-system                 kindnet-wzdm5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      53s
	  kube-system                 kube-apiserver-embed-certs-880603             250m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-embed-certs-880603    200m (2%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-k4kcs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-embed-certs-880603             100m (1%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 52s                kube-proxy       
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node embed-certs-880603 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node embed-certs-880603 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 64s)  kubelet          Node embed-certs-880603 status is now: NodeHasSufficientPID
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s                kubelet          Node embed-certs-880603 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s                kubelet          Node embed-certs-880603 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s                kubelet          Node embed-certs-880603 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node embed-certs-880603 event: Registered Node embed-certs-880603 in Controller
	  Normal  NodeReady                12s                kubelet          Node embed-certs-880603 status is now: NodeReady
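
This node dump is standard kubectl output and can be regenerated at any time; a sketch against the same cluster:

	# Reproduce the node description above (conditions, capacity, events).
	kubectl describe node embed-certs-880603
	# Compact view of readiness, internal IP, and versions:
	kubectl get node embed-certs-880603 -o wide
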
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[ +43.809014] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 6f 4b 2b 7f 46 08 06
	[  +0.000367] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	
	
	==> etcd [f9eb6d4962199e2414f215b751ce0dc51aaf706f98379421ebe2da0e0fc0187c] <==
	{"level":"warn","ts":"2025-10-18T09:15:51.338474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.346272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.354478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.362299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.370968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.387313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.395218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.403954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.414421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.422620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.432104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.441104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.450005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.458231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.466025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.472631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.480462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.488268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.499638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.508612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.518700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:15:51.579132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:19.027918Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.635528ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356040525189413 > lease_revoke:<id:590699f69a9690cc>","response":"size:28"}
	{"level":"warn","ts":"2025-10-18T09:16:41.454075Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.566089ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356040525189541 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:434 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:4263 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T09:16:41.454186Z","caller":"traceutil/trace.go:172","msg":"trace[1216795553] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"145.131046ms","start":"2025-10-18T09:16:41.309039Z","end":"2025-10-18T09:16:41.454170Z","steps":["trace[1216795553] 'process raft request'  (duration: 42.992318ms)","trace[1216795553] 'compare'  (duration: 101.34995ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:16:53 up 59 min,  0 user,  load average: 3.98, 3.50, 2.37
	Linux embed-certs-880603 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [509de041d14e36770850bcdc79db6c99b0b34c359f2aabb057cab608a35748c1] <==
	I1018 09:16:00.714562       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:16:00.714899       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:16:00.715074       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:16:00.715093       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:16:00.715117       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:16:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:16:00.922963       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:16:00.922990       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:16:00.923002       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:16:00.923138       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:16:30.924160       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 09:16:30.924232       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 09:16:30.924384       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 09:16:30.924532       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1018 09:16:32.323209       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:16:32.323245       1 metrics.go:72] Registering metrics
	I1018 09:16:32.323404       1 controller.go:711] "Syncing nftables rules"
	I1018 09:16:40.926426       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:16:40.926480       1 main.go:301] handling current node
	I1018 09:16:50.926483       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:16:50.926523       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f851ac003d1045602c6672ad4684d62633aed7fb263e3adc44d8e08b1594b24a] <==
	I1018 09:15:52.092941       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:15:52.093668       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:15:52.101002       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 09:15:52.103024       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:15:52.112875       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:15:52.113560       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:15:52.292432       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:15:52.994071       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:15:52.998242       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:15:52.998258       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:15:53.533962       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:15:53.573791       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:15:53.699704       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:15:53.706179       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 09:15:53.707376       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:15:53.712021       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:15:54.008957       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:15:54.764609       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:15:54.774848       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:15:54.785806       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:15:59.662834       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:15:59.667805       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:15:59.866921       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:16:00.113506       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1018 09:16:51.765406       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:42504: use of closed network connection
	
	
	==> kube-controller-manager [ae6aea4f77598059c49733211e8e1a76eac5dc0eb368b119365fcaad6072ec64] <==
	I1018 09:15:59.008417       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:15:59.008535       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:15:59.008546       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:15:59.008580       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:15:59.008639       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:15:59.008646       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:15:59.008667       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-880603"
	I1018 09:15:59.008708       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:15:59.008709       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:15:59.008684       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:15:59.008766       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 09:15:59.009903       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:15:59.012187       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:15:59.012237       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:15:59.013614       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:15:59.013625       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:15:59.013678       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:15:59.013720       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:15:59.013732       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:15:59.013738       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:15:59.015014       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:15:59.020420       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-880603" podCIDRs=["10.244.0.0/24"]
	I1018 09:15:59.022389       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:15:59.030704       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:16:44.016041       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [503aa9a806bb0993faac27fde97198a8ca881a64cfb08f5b7613211ed5dc78f0] <==
	I1018 09:16:00.541604       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:16:00.603850       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:16:00.704645       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:16:00.704686       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:16:00.704781       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:16:00.730882       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:16:00.730960       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:16:00.737073       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:16:00.737622       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:16:00.737659       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:16:00.739638       1 config.go:200] "Starting service config controller"
	I1018 09:16:00.739714       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:16:00.739797       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:16:00.739832       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:16:00.739841       1 config.go:309] "Starting node config controller"
	I1018 09:16:00.739928       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:16:00.739936       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:16:00.739841       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:16:00.739945       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:16:00.839895       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:16:00.839976       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:16:00.841178       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ceefb0fe6b6e2655cde2a1786053a0ad1098f36f06d2eb55dcd8685390d103dc] <==
	E1018 09:15:52.168251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:15:52.168264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:15:52.168406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:15:52.168506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:15:52.168589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:15:52.168706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:15:52.168806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:15:52.168832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:15:52.168886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:15:52.168991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:15:52.169457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:15:52.169536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:15:53.114015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:15:53.181292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:15:53.187461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:15:53.196748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:15:53.197575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:15:53.216072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 09:15:53.235288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:15:53.245494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:15:53.305992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:15:53.317241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:15:53.323508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:15:53.333820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1018 09:15:55.063510       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:15:55 embed-certs-880603 kubelet[1348]: I1018 09:15:55.720772    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-880603" podStartSLOduration=1.7207490380000001 podStartE2EDuration="1.720749038s" podCreationTimestamp="2025-10-18 09:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:15:55.710681695 +0000 UTC m=+1.137093671" watchObservedRunningTime="2025-10-18 09:15:55.720749038 +0000 UTC m=+1.147161014"
	Oct 18 09:15:55 embed-certs-880603 kubelet[1348]: I1018 09:15:55.730090    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-880603" podStartSLOduration=1.730073635 podStartE2EDuration="1.730073635s" podCreationTimestamp="2025-10-18 09:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:15:55.720993854 +0000 UTC m=+1.147405828" watchObservedRunningTime="2025-10-18 09:15:55.730073635 +0000 UTC m=+1.156485613"
	Oct 18 09:15:55 embed-certs-880603 kubelet[1348]: I1018 09:15:55.738882    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-880603" podStartSLOduration=1.73886057 podStartE2EDuration="1.73886057s" podCreationTimestamp="2025-10-18 09:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:15:55.729976624 +0000 UTC m=+1.156388602" watchObservedRunningTime="2025-10-18 09:15:55.73886057 +0000 UTC m=+1.165272548"
	Oct 18 09:15:55 embed-certs-880603 kubelet[1348]: I1018 09:15:55.748410    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-880603" podStartSLOduration=1.748385603 podStartE2EDuration="1.748385603s" podCreationTimestamp="2025-10-18 09:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:15:55.738698483 +0000 UTC m=+1.165110459" watchObservedRunningTime="2025-10-18 09:15:55.748385603 +0000 UTC m=+1.174797570"
	Oct 18 09:15:59 embed-certs-880603 kubelet[1348]: I1018 09:15:59.093482    1348 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:15:59 embed-certs-880603 kubelet[1348]: I1018 09:15:59.094166    1348 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:16:00 embed-certs-880603 kubelet[1348]: I1018 09:16:00.189915    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83d1821f-468a-4bf0-8fc0-e40e0668f6ff-lib-modules\") pod \"kube-proxy-k4kcs\" (UID: \"83d1821f-468a-4bf0-8fc0-e40e0668f6ff\") " pod="kube-system/kube-proxy-k4kcs"
	Oct 18 09:16:00 embed-certs-880603 kubelet[1348]: I1018 09:16:00.189982    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b679w\" (UniqueName: \"kubernetes.io/projected/83d1821f-468a-4bf0-8fc0-e40e0668f6ff-kube-api-access-b679w\") pod \"kube-proxy-k4kcs\" (UID: \"83d1821f-468a-4bf0-8fc0-e40e0668f6ff\") " pod="kube-system/kube-proxy-k4kcs"
	Oct 18 09:16:00 embed-certs-880603 kubelet[1348]: I1018 09:16:00.190022    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20629c75-ca93-46db-875e-49d67c7b3f06-xtables-lock\") pod \"kindnet-wzdm5\" (UID: \"20629c75-ca93-46db-875e-49d67c7b3f06\") " pod="kube-system/kindnet-wzdm5"
	Oct 18 09:16:00 embed-certs-880603 kubelet[1348]: I1018 09:16:00.190041    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20629c75-ca93-46db-875e-49d67c7b3f06-lib-modules\") pod \"kindnet-wzdm5\" (UID: \"20629c75-ca93-46db-875e-49d67c7b3f06\") " pod="kube-system/kindnet-wzdm5"
	Oct 18 09:16:00 embed-certs-880603 kubelet[1348]: I1018 09:16:00.190061    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7cfl\" (UniqueName: \"kubernetes.io/projected/20629c75-ca93-46db-875e-49d67c7b3f06-kube-api-access-j7cfl\") pod \"kindnet-wzdm5\" (UID: \"20629c75-ca93-46db-875e-49d67c7b3f06\") " pod="kube-system/kindnet-wzdm5"
	Oct 18 09:16:00 embed-certs-880603 kubelet[1348]: I1018 09:16:00.190090    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/20629c75-ca93-46db-875e-49d67c7b3f06-cni-cfg\") pod \"kindnet-wzdm5\" (UID: \"20629c75-ca93-46db-875e-49d67c7b3f06\") " pod="kube-system/kindnet-wzdm5"
	Oct 18 09:16:00 embed-certs-880603 kubelet[1348]: I1018 09:16:00.190109    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/83d1821f-468a-4bf0-8fc0-e40e0668f6ff-kube-proxy\") pod \"kube-proxy-k4kcs\" (UID: \"83d1821f-468a-4bf0-8fc0-e40e0668f6ff\") " pod="kube-system/kube-proxy-k4kcs"
	Oct 18 09:16:00 embed-certs-880603 kubelet[1348]: I1018 09:16:00.190132    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83d1821f-468a-4bf0-8fc0-e40e0668f6ff-xtables-lock\") pod \"kube-proxy-k4kcs\" (UID: \"83d1821f-468a-4bf0-8fc0-e40e0668f6ff\") " pod="kube-system/kube-proxy-k4kcs"
	Oct 18 09:16:00 embed-certs-880603 kubelet[1348]: I1018 09:16:00.718547    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wzdm5" podStartSLOduration=0.718525477 podStartE2EDuration="718.525477ms" podCreationTimestamp="2025-10-18 09:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:16:00.717833345 +0000 UTC m=+6.144245321" watchObservedRunningTime="2025-10-18 09:16:00.718525477 +0000 UTC m=+6.144937455"
	Oct 18 09:16:00 embed-certs-880603 kubelet[1348]: I1018 09:16:00.745808    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k4kcs" podStartSLOduration=0.745789402 podStartE2EDuration="745.789402ms" podCreationTimestamp="2025-10-18 09:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:16:00.745724817 +0000 UTC m=+6.172136793" watchObservedRunningTime="2025-10-18 09:16:00.745789402 +0000 UTC m=+6.172201379"
	Oct 18 09:16:41 embed-certs-880603 kubelet[1348]: I1018 09:16:41.129894    1348 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 09:16:41 embed-certs-880603 kubelet[1348]: I1018 09:16:41.374855    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm8r5\" (UniqueName: \"kubernetes.io/projected/04bb2d33-29f9-45e9-a6b1-e2b770651c0f-kube-api-access-nm8r5\") pod \"coredns-66bc5c9577-7fnw7\" (UID: \"04bb2d33-29f9-45e9-a6b1-e2b770651c0f\") " pod="kube-system/coredns-66bc5c9577-7fnw7"
	Oct 18 09:16:41 embed-certs-880603 kubelet[1348]: I1018 09:16:41.374916    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d2aa7a09-3332-4744-9180-d307b4fc8194-tmp\") pod \"storage-provisioner\" (UID: \"d2aa7a09-3332-4744-9180-d307b4fc8194\") " pod="kube-system/storage-provisioner"
	Oct 18 09:16:41 embed-certs-880603 kubelet[1348]: I1018 09:16:41.374932    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5w4\" (UniqueName: \"kubernetes.io/projected/d2aa7a09-3332-4744-9180-d307b4fc8194-kube-api-access-dk5w4\") pod \"storage-provisioner\" (UID: \"d2aa7a09-3332-4744-9180-d307b4fc8194\") " pod="kube-system/storage-provisioner"
	Oct 18 09:16:41 embed-certs-880603 kubelet[1348]: I1018 09:16:41.374947    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04bb2d33-29f9-45e9-a6b1-e2b770651c0f-config-volume\") pod \"coredns-66bc5c9577-7fnw7\" (UID: \"04bb2d33-29f9-45e9-a6b1-e2b770651c0f\") " pod="kube-system/coredns-66bc5c9577-7fnw7"
	Oct 18 09:16:41 embed-certs-880603 kubelet[1348]: I1018 09:16:41.845625    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.845600207 podStartE2EDuration="41.845600207s" podCreationTimestamp="2025-10-18 09:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:16:41.845144264 +0000 UTC m=+47.271556266" watchObservedRunningTime="2025-10-18 09:16:41.845600207 +0000 UTC m=+47.272012183"
	Oct 18 09:16:41 embed-certs-880603 kubelet[1348]: I1018 09:16:41.845731    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7fnw7" podStartSLOduration=41.845725631 podStartE2EDuration="41.845725631s" podCreationTimestamp="2025-10-18 09:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:16:41.829994819 +0000 UTC m=+47.256406809" watchObservedRunningTime="2025-10-18 09:16:41.845725631 +0000 UTC m=+47.272137606"
	Oct 18 09:16:43 embed-certs-880603 kubelet[1348]: I1018 09:16:43.791754    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddjmn\" (UniqueName: \"kubernetes.io/projected/ef4e2065-8b84-4980-be88-6bfeded4c762-kube-api-access-ddjmn\") pod \"busybox\" (UID: \"ef4e2065-8b84-4980-be88-6bfeded4c762\") " pod="default/busybox"
	Oct 18 09:16:44 embed-certs-880603 kubelet[1348]: I1018 09:16:44.830605    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.079978199 podStartE2EDuration="1.830578607s" podCreationTimestamp="2025-10-18 09:16:43 +0000 UTC" firstStartedPulling="2025-10-18 09:16:43.995738532 +0000 UTC m=+49.422150488" lastFinishedPulling="2025-10-18 09:16:44.74633894 +0000 UTC m=+50.172750896" observedRunningTime="2025-10-18 09:16:44.830378297 +0000 UTC m=+50.256790272" watchObservedRunningTime="2025-10-18 09:16:44.830578607 +0000 UTC m=+50.256990583"
	
	
	==> storage-provisioner [c113fa2505a02e043ee3cf1c7f33ffc7c4aeae8d3d0f080917def831deb3a96d] <==
	I1018 09:16:41.684026       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:16:41.693255       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:16:41.693314       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:16:41.696635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:41.702942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:16:41.703156       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:16:41.703451       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-880603_034e147b-7f6f-4753-be71-5f49d91ed7e5!
	I1018 09:16:41.703797       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5e8e0df5-6f93-48eb-99a3-eaa105313a85", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-880603_034e147b-7f6f-4753-be71-5f49d91ed7e5 became leader
	W1018 09:16:41.708011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:41.717114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:16:41.805127       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-880603_034e147b-7f6f-4753-be71-5f49d91ed7e5!
	W1018 09:16:43.720302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:43.725778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:45.728494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:45.733773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:47.737396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:47.743599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:49.747624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:49.752013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:51.756046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:51.760783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880603 -n embed-certs-880603
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-880603 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.19s)
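Two advisory signals in the dump above are worth noting alongside the failure itself: the storage-provisioner still takes its leader-election lock on a v1 Endpoints object (hence the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings), and kube-proxy reports that nodePortAddresses is unset. A hedged way to inspect both by hand, assuming the embed-certs-880603 context from this run is still reachable and that kubeadm generated the usual kube-proxy ConfigMap:

	# The Endpoints object the provisioner renews as its leader-election lock
	# (name taken from the storage-provisioner log above):
	kubectl --context embed-certs-880603 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# The kube-proxy config; an empty/absent nodePortAddresses is what triggers the
	# "--nodeport-addresses primary" suggestion in the kube-proxy log above:
	kubectl --context embed-certs-880603 -n kube-system get configmap kube-proxy -o yaml | grep -n -A 2 nodePortAddresses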

TestStartStop/group/old-k8s-version/serial/Pause (6.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-951975 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-951975 --alsologtostderr -v=1: exit status 80 (2.364062708s)

-- stdout --
	* Pausing node old-k8s-version-951975 ... 
	
	

-- /stdout --
** stderr ** 
	I1018 09:16:59.491537  315254 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:16:59.491816  315254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:16:59.491827  315254 out.go:374] Setting ErrFile to fd 2...
	I1018 09:16:59.491833  315254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:16:59.492061  315254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:16:59.492332  315254 out.go:368] Setting JSON to false
	I1018 09:16:59.492404  315254 mustload.go:65] Loading cluster: old-k8s-version-951975
	I1018 09:16:59.492745  315254 config.go:182] Loaded profile config "old-k8s-version-951975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:16:59.493149  315254 cli_runner.go:164] Run: docker container inspect old-k8s-version-951975 --format={{.State.Status}}
	I1018 09:16:59.512081  315254 host.go:66] Checking if "old-k8s-version-951975" exists ...
	I1018 09:16:59.512441  315254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:16:59.575703  315254 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:87 OomKillDisable:false NGoroutines:93 SystemTime:2025-10-18 09:16:59.563866267 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:16:59.576385  315254 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-951975 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:16:59.578596  315254 out.go:179] * Pausing node old-k8s-version-951975 ... 
	I1018 09:16:59.579779  315254 host.go:66] Checking if "old-k8s-version-951975" exists ...
	I1018 09:16:59.580091  315254 ssh_runner.go:195] Run: systemctl --version
	I1018 09:16:59.580131  315254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-951975
	I1018 09:16:59.599412  315254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/old-k8s-version-951975/id_rsa Username:docker}
	I1018 09:16:59.699545  315254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:16:59.714558  315254 pause.go:52] kubelet running: true
	I1018 09:16:59.714628  315254 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:16:59.901382  315254 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:16:59.901472  315254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:16:59.972693  315254 cri.go:89] found id: "dd5dcb7d66045a0152ff8c078146a86a98fcc49d2df4f7b6dd15a94d89058078"
	I1018 09:16:59.972716  315254 cri.go:89] found id: "850ecec987439ee84e6448cada291df9cce48b7f0c730a4f0638f43a13af3bc0"
	I1018 09:16:59.972721  315254 cri.go:89] found id: "e13798224f38d89856ac0d589f0dbef9694affee05d404ed03bc1423d5b36d66"
	I1018 09:16:59.972728  315254 cri.go:89] found id: "37cdebb50e3452e3797b2554403f29b4e05357c580e8231a729dc63a87d0f932"
	I1018 09:16:59.972732  315254 cri.go:89] found id: "c707d266c99b4e19a4a07275b8e8367a1594b6cf94012a72f161afb9027cd1cf"
	I1018 09:16:59.972737  315254 cri.go:89] found id: "16a4a0198ff18096f38de4bc58c31bf5f03bdf37076c3c4e4d32e4fb7d38b886"
	I1018 09:16:59.972742  315254 cri.go:89] found id: "b2de01dc9072ccffefae3182aec6a17d04655623980355d4f88424a0d4e01818"
	I1018 09:16:59.972746  315254 cri.go:89] found id: "f2e7310b9fd30510062cf4fc3f3196d0199a8bb693ccf374fe7926da05bc717a"
	I1018 09:16:59.972749  315254 cri.go:89] found id: "1d1d7b9a4603835edfbcabef69e64877d18a1499301245bf79771003e000b780"
	I1018 09:16:59.972756  315254 cri.go:89] found id: "f59a4536a6be07648a7d886609196032c6dd09725a1a2250b74c79dd1ca7a6ee"
	I1018 09:16:59.972761  315254 cri.go:89] found id: "1731b366ef3fded158839dbcd6cc44068387d425b2e39024818c85643cff484e"
	I1018 09:16:59.972765  315254 cri.go:89] found id: ""
	I1018 09:16:59.972810  315254 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:16:59.985257  315254 retry.go:31] will retry after 344.35942ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:16:59Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:17:00.329872  315254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:17:00.344236  315254 pause.go:52] kubelet running: false
	I1018 09:17:00.344295  315254 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:17:00.494685  315254 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:17:00.494782  315254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:17:00.565914  315254 cri.go:89] found id: "dd5dcb7d66045a0152ff8c078146a86a98fcc49d2df4f7b6dd15a94d89058078"
	I1018 09:17:00.565938  315254 cri.go:89] found id: "850ecec987439ee84e6448cada291df9cce48b7f0c730a4f0638f43a13af3bc0"
	I1018 09:17:00.565944  315254 cri.go:89] found id: "e13798224f38d89856ac0d589f0dbef9694affee05d404ed03bc1423d5b36d66"
	I1018 09:17:00.565949  315254 cri.go:89] found id: "37cdebb50e3452e3797b2554403f29b4e05357c580e8231a729dc63a87d0f932"
	I1018 09:17:00.565953  315254 cri.go:89] found id: "c707d266c99b4e19a4a07275b8e8367a1594b6cf94012a72f161afb9027cd1cf"
	I1018 09:17:00.565958  315254 cri.go:89] found id: "16a4a0198ff18096f38de4bc58c31bf5f03bdf37076c3c4e4d32e4fb7d38b886"
	I1018 09:17:00.565962  315254 cri.go:89] found id: "b2de01dc9072ccffefae3182aec6a17d04655623980355d4f88424a0d4e01818"
	I1018 09:17:00.565966  315254 cri.go:89] found id: "f2e7310b9fd30510062cf4fc3f3196d0199a8bb693ccf374fe7926da05bc717a"
	I1018 09:17:00.565970  315254 cri.go:89] found id: "1d1d7b9a4603835edfbcabef69e64877d18a1499301245bf79771003e000b780"
	I1018 09:17:00.565991  315254 cri.go:89] found id: "f59a4536a6be07648a7d886609196032c6dd09725a1a2250b74c79dd1ca7a6ee"
	I1018 09:17:00.566000  315254 cri.go:89] found id: "1731b366ef3fded158839dbcd6cc44068387d425b2e39024818c85643cff484e"
	I1018 09:17:00.566004  315254 cri.go:89] found id: ""
	I1018 09:17:00.566053  315254 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:17:00.579014  315254 retry.go:31] will retry after 215.043035ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:17:00Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:17:00.794330  315254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:17:00.808599  315254 pause.go:52] kubelet running: false
	I1018 09:17:00.808651  315254 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:17:00.961094  315254 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:17:00.961195  315254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:17:01.039105  315254 cri.go:89] found id: "dd5dcb7d66045a0152ff8c078146a86a98fcc49d2df4f7b6dd15a94d89058078"
	I1018 09:17:01.039129  315254 cri.go:89] found id: "850ecec987439ee84e6448cada291df9cce48b7f0c730a4f0638f43a13af3bc0"
	I1018 09:17:01.039134  315254 cri.go:89] found id: "e13798224f38d89856ac0d589f0dbef9694affee05d404ed03bc1423d5b36d66"
	I1018 09:17:01.039137  315254 cri.go:89] found id: "37cdebb50e3452e3797b2554403f29b4e05357c580e8231a729dc63a87d0f932"
	I1018 09:17:01.039139  315254 cri.go:89] found id: "c707d266c99b4e19a4a07275b8e8367a1594b6cf94012a72f161afb9027cd1cf"
	I1018 09:17:01.039144  315254 cri.go:89] found id: "16a4a0198ff18096f38de4bc58c31bf5f03bdf37076c3c4e4d32e4fb7d38b886"
	I1018 09:17:01.039148  315254 cri.go:89] found id: "b2de01dc9072ccffefae3182aec6a17d04655623980355d4f88424a0d4e01818"
	I1018 09:17:01.039153  315254 cri.go:89] found id: "f2e7310b9fd30510062cf4fc3f3196d0199a8bb693ccf374fe7926da05bc717a"
	I1018 09:17:01.039158  315254 cri.go:89] found id: "1d1d7b9a4603835edfbcabef69e64877d18a1499301245bf79771003e000b780"
	I1018 09:17:01.039166  315254 cri.go:89] found id: "f59a4536a6be07648a7d886609196032c6dd09725a1a2250b74c79dd1ca7a6ee"
	I1018 09:17:01.039174  315254 cri.go:89] found id: "1731b366ef3fded158839dbcd6cc44068387d425b2e39024818c85643cff484e"
	I1018 09:17:01.039178  315254 cri.go:89] found id: ""
	I1018 09:17:01.039241  315254 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:17:01.052330  315254 retry.go:31] will retry after 489.793267ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:17:01Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:17:01.543070  315254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:17:01.557410  315254 pause.go:52] kubelet running: false
	I1018 09:17:01.557474  315254 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:17:01.710823  315254 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:17:01.710897  315254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:17:01.782710  315254 cri.go:89] found id: "dd5dcb7d66045a0152ff8c078146a86a98fcc49d2df4f7b6dd15a94d89058078"
	I1018 09:17:01.782735  315254 cri.go:89] found id: "850ecec987439ee84e6448cada291df9cce48b7f0c730a4f0638f43a13af3bc0"
	I1018 09:17:01.782782  315254 cri.go:89] found id: "e13798224f38d89856ac0d589f0dbef9694affee05d404ed03bc1423d5b36d66"
	I1018 09:17:01.782799  315254 cri.go:89] found id: "37cdebb50e3452e3797b2554403f29b4e05357c580e8231a729dc63a87d0f932"
	I1018 09:17:01.782805  315254 cri.go:89] found id: "c707d266c99b4e19a4a07275b8e8367a1594b6cf94012a72f161afb9027cd1cf"
	I1018 09:17:01.782809  315254 cri.go:89] found id: "16a4a0198ff18096f38de4bc58c31bf5f03bdf37076c3c4e4d32e4fb7d38b886"
	I1018 09:17:01.782811  315254 cri.go:89] found id: "b2de01dc9072ccffefae3182aec6a17d04655623980355d4f88424a0d4e01818"
	I1018 09:17:01.782813  315254 cri.go:89] found id: "f2e7310b9fd30510062cf4fc3f3196d0199a8bb693ccf374fe7926da05bc717a"
	I1018 09:17:01.782816  315254 cri.go:89] found id: "1d1d7b9a4603835edfbcabef69e64877d18a1499301245bf79771003e000b780"
	I1018 09:17:01.782822  315254 cri.go:89] found id: "f59a4536a6be07648a7d886609196032c6dd09725a1a2250b74c79dd1ca7a6ee"
	I1018 09:17:01.782825  315254 cri.go:89] found id: "1731b366ef3fded158839dbcd6cc44068387d425b2e39024818c85643cff484e"
	I1018 09:17:01.782828  315254 cri.go:89] found id: ""
	I1018 09:17:01.782865  315254 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:17:01.797397  315254 out.go:203] 
	W1018 09:17:01.798724  315254 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:17:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:17:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:17:01.798744  315254 out.go:285] * 
	* 
	W1018 09:17:01.802949  315254 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:17:01.804236  315254 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-951975 --alsologtostderr -v=1 failed: exit status 80
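Every attempt above fails identically: the pause path enumerates running containers with `sudo runc list -f json`, and that call exits 1 because /run/runc does not exist on the node, even though the CRI side (crictl) still lists eleven kube-system containers. A hedged sketch for reproducing the mismatch by hand on this profile:

	# Shell into the node:
	out/minikube-linux-amd64 ssh -p old-k8s-version-951975
	# Inside the node, the CRI view still lists the kube-system containers
	# (same call the pause path uses for discovery):
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# ...while the exact call pause makes next fails with the error seen above:
	sudo runc list -f json        # -> "open /run/runc: no such file or directory"
	ls -ld /run/runc              # confirm the runc state directory is missing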
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-951975
helpers_test.go:243: (dbg) docker inspect old-k8s-version-951975:

-- stdout --
	[
	    {
	        "Id": "d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866",
	        "Created": "2025-10-18T09:14:48.164862927Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 302921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:16:03.734445588Z",
	            "FinishedAt": "2025-10-18T09:16:02.821932847Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866/hosts",
	        "LogPath": "/var/lib/docker/containers/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866-json.log",
	        "Name": "/old-k8s-version-951975",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-951975:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-951975",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866",
	                "LowerDir": "/var/lib/docker/overlay2/e7302d3490bbf936dd2f2d552ba2ba9dcf7d4bb0152646a3d6445c572600b324-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e7302d3490bbf936dd2f2d552ba2ba9dcf7d4bb0152646a3d6445c572600b324/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e7302d3490bbf936dd2f2d552ba2ba9dcf7d4bb0152646a3d6445c572600b324/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e7302d3490bbf936dd2f2d552ba2ba9dcf7d4bb0152646a3d6445c572600b324/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-951975",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-951975/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-951975",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-951975",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-951975",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3cd13497a022261452eb5d55c790262e06bba8e434c0d50f8a561ab6c128fa72",
	            "SandboxKey": "/var/run/docker/netns/3cd13497a022",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-951975": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:02:bf:15:d1:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "24bc48639b258a05e4ef01c1cdad81fb398d660a6740ed3b45a916093c5c2afe",
	                    "EndpointID": "d60f8d93e6ca94ac31f061dbcf0c07af2906bb8301000001bd6ebc63a7b68d1d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-951975",
	                        "d0100f52d126"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
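The JSON above is the raw "docker container inspect" output captured for the old-k8s-version-951975 node container. Its NetworkSettings.Ports block records how the container's 22/tcp was published to the host (port 33103 here), which is the port the harness dials for SSH; the cli_runner.go lines later in this report query exactly that field with a Go template. Below is a minimal Go sketch of that lookup, assuming only that docker is on PATH; the function name hostSSHPort is illustrative, this is not minikube's actual cli_runner code:

	// Sketch: read the host port Docker mapped to the container's 22/tcp,
	// using the same Go template that appears in the cli_runner.go log
	// lines below. Container name is taken from this report.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostSSHPort(container string) (string, error) {
		// index NetworkSettings.Ports["22/tcp"][0].HostPort
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("old-k8s-version-951975")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", port) // 33103 per the Ports block above
	}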
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-951975 -n old-k8s-version-951975
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-951975 -n old-k8s-version-951975: exit status 2 (350.882147ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
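The pattern just above, a non-zero exit from out/minikube-linux-amd64 status while stdout still prints "Running", is why the harness annotates the result "may be ok": with --format={{.Host}} the host state is still parseable even when a degraded component forces exit status 2. A hedged Go sketch of that tolerant check follows; the binary path and profile name are copied from this report, and the error handling is an illustration, not the test's actual code:

	// Sketch: run minikube status with a Go-template format and keep the
	// stdout value even on a non-zero exit, mirroring the check above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-951975")
		// Output returns captured stdout even when err is an *exec.ExitError.
		out, err := cmd.Output()
		host := strings.TrimSpace(string(out))
		if err != nil {
			// e.g. exit status 2: a component is degraded, but the host
			// state is still usable for the post-mortem that follows.
			fmt.Printf("status error: %v (host=%q, may be ok)\n", err, host)
			return
		}
		fmt.Println("host:", host)
	}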
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-951975 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-951975 logs -n 25: (1.188291769s)
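The 25-line post-mortem dump below is klog-framed output; its own header documents the framing as [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. A small Go sketch that splits one such line into fields (the field names are my reading of that header, not any minikube API):

	// Sketch: parse a klog-framed line per the "Log line format" header
	// shown in the dump below.
	package main

	import (
		"fmt"
		"regexp"
	)

	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4} \d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+)\s+(\S+:\d+)\]\s(.*)$`)

	func main() {
		line := "I1018 09:16:21.259556  309439 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s time=%s pid=%s at=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5])
		}
	}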
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/docker/daemon.json                                                                                                            │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo docker system info                                                                                                                     │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl status cri-docker --all --full --no-pager                                                                                    │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl cat cri-docker --no-pager                                                                                                    │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                               │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                         │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cri-dockerd --version                                                                                                                  │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl status containerd --all --full --no-pager                                                                                    │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl cat containerd --no-pager                                                                                                    │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /lib/systemd/system/containerd.service                                                                                             │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/containerd/config.toml                                                                                                        │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo containerd config dump                                                                                                                 │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl status crio --all --full --no-pager                                                                                          │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl cat crio --no-pager                                                                                                          │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo crio config                                                                                                                            │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ delete  │ -p enable-default-cni-448954                                                                                                                                             │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ delete  │ -p disable-driver-mounts-634520                                                                                                                                          │ disable-driver-mounts-634520 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-031066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p no-preload-031066 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-880603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-880603 --alsologtostderr -v=3                                                                                                                             │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ image   │ old-k8s-version-951975 image list --format=json                                                                                                                          │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ pause   │ -p old-k8s-version-951975 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:16:21
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:16:21.259556  309439 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:16:21.259842  309439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:16:21.259853  309439 out.go:374] Setting ErrFile to fd 2...
	I1018 09:16:21.259859  309439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:16:21.260111  309439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:16:21.260632  309439 out.go:368] Setting JSON to false
	I1018 09:16:21.261865  309439 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3529,"bootTime":1760775452,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:16:21.261961  309439 start.go:141] virtualization: kvm guest
	I1018 09:16:21.264134  309439 out.go:179] * [no-preload-031066] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:16:21.265731  309439 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:16:21.265725  309439 notify.go:220] Checking for updates...
	I1018 09:16:21.268703  309439 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:16:21.270038  309439 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:16:21.271373  309439 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:16:21.272816  309439 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:16:21.274205  309439 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:16:21.275956  309439 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:21.276446  309439 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:16:21.302079  309439 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:16:21.302171  309439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:16:21.363454  309439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-18 09:16:21.352641655 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:16:21.363573  309439 docker.go:318] overlay module found
	I1018 09:16:21.365496  309439 out.go:179] * Using the docker driver based on existing profile
	I1018 09:16:21.366846  309439 start.go:305] selected driver: docker
	I1018 09:16:21.366860  309439 start.go:925] validating driver "docker" against &{Name:no-preload-031066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:16:21.366946  309439 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:16:21.367537  309439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:16:21.430714  309439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-18 09:16:21.420288348 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:16:21.431045  309439 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:16:21.431076  309439 cni.go:84] Creating CNI manager for ""
	I1018 09:16:21.431123  309439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:16:21.431162  309439 start.go:349] cluster config:
	{Name:no-preload-031066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:16:21.433306  309439 out.go:179] * Starting "no-preload-031066" primary control-plane node in "no-preload-031066" cluster
	I1018 09:16:21.434506  309439 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:16:21.435855  309439 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:16:21.437073  309439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:16:21.437171  309439 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:16:21.437215  309439 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/config.json ...
	I1018 09:16:21.437382  309439 cache.go:107] acquiring lock: {Name:mka90e9ba087577c518f2d2789ac53b5d3a7e763 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437396  309439 cache.go:107] acquiring lock: {Name:mk6fc1dc569bbb33e36e89f8f90205f595f97590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437429  309439 cache.go:107] acquiring lock: {Name:mk862309f449c155bd44d2ad75f71086b6e84154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437488  309439 cache.go:107] acquiring lock: {Name:mkba01dbd7a5ffa26c612bd6d2ecfdfb06fab7f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437517  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 09:16:21.437376  309439 cache.go:107] acquiring lock: {Name:mkd7da5cca5b2c7f5a7a2978ccb1f907bf4e999d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437529  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 09:16:21.437531  309439 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 103.396µs
	I1018 09:16:21.437548  309439 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 09:16:21.437540  309439 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 155.136µs
	I1018 09:16:21.437551  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 09:16:21.437553  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 09:16:21.437556  309439 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 09:16:21.437519  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 09:16:21.437561  309439 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 199.486µs
	I1018 09:16:21.437564  309439 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 84.165µs
	I1018 09:16:21.437573  309439 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 09:16:21.437565  309439 cache.go:107] acquiring lock: {Name:mk207c5d06cdfbb02440711f0747e0524648cf15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437611  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 09:16:21.437627  309439 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 65.353µs
	I1018 09:16:21.437636  309439 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 09:16:21.437575  309439 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 09:16:21.437513  309439 cache.go:107] acquiring lock: {Name:mk4deb8933cd428b15e028b41c12d1c1d0a4c5a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437573  309439 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 203.037µs
	I1018 09:16:21.437696  309439 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 09:16:21.437553  309439 cache.go:107] acquiring lock: {Name:mkeb58e0ef10b1fdccc29a88361956d4cde72da3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437730  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 09:16:21.437741  309439 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 240.836µs
	I1018 09:16:21.437753  309439 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 09:16:21.437671  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1018 09:16:21.437763  309439 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 319.038µs
	I1018 09:16:21.437774  309439 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 09:16:21.437787  309439 cache.go:87] Successfully saved all images to host disk.
	I1018 09:16:21.460092  309439 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:16:21.460113  309439 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:16:21.460128  309439 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:16:21.460160  309439 start.go:360] acquireMachinesLock for no-preload-031066: {Name:mkf2aade90157f4c0d311140fc5fc0e3e0428507 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.460220  309439 start.go:364] duration metric: took 39.29µs to acquireMachinesLock for "no-preload-031066"
	I1018 09:16:21.460239  309439 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:16:21.460249  309439 fix.go:54] fixHost starting: 
	I1018 09:16:21.460515  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:21.479263  309439 fix.go:112] recreateIfNeeded on no-preload-031066: state=Stopped err=<nil>
	W1018 09:16:21.479306  309439 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:16:18.612194  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:21.111155  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:19.794473  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:22.294004  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:19.783671  307829 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-986220:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.490229561s)
	I1018 09:16:19.783707  307829 kic.go:203] duration metric: took 4.490410558s to extract preloaded images to volume ...
	W1018 09:16:19.783815  307829 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:16:19.783854  307829 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:16:19.783901  307829 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:16:19.847832  307829 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-986220 --name default-k8s-diff-port-986220 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-986220 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-986220 --network default-k8s-diff-port-986220 --ip 192.168.94.2 --volume default-k8s-diff-port-986220:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:16:20.166578  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Running}}
	I1018 09:16:20.186662  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:20.206875  307829 cli_runner.go:164] Run: docker exec default-k8s-diff-port-986220 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:16:20.258252  307829 oci.go:144] the created container "default-k8s-diff-port-986220" has a running status.
	I1018 09:16:20.258285  307829 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa...
	I1018 09:16:20.304155  307829 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:16:20.339663  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:20.359254  307829 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:16:20.359276  307829 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-986220 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:16:20.402369  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:20.428033  307829 machine.go:93] provisionDockerMachine start ...
	I1018 09:16:20.428144  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:20.449570  307829 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:20.449929  307829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 09:16:20.449948  307829 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:16:20.450769  307829 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36384->127.0.0.1:33108: read: connection reset by peer
	I1018 09:16:23.589648  307829 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-986220
	
	I1018 09:16:23.589683  307829 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-986220"
	I1018 09:16:23.589753  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:23.609951  307829 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:23.610242  307829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 09:16:23.610262  307829 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-986220 && echo "default-k8s-diff-port-986220" | sudo tee /etc/hostname
	I1018 09:16:23.757907  307829 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-986220
	
	I1018 09:16:23.757979  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:23.777613  307829 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:23.777861  307829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 09:16:23.777889  307829 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-986220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-986220/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-986220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:16:23.916520  307829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:16:23.916547  307829 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:16:23.916591  307829 ubuntu.go:190] setting up certificates
	I1018 09:16:23.916606  307829 provision.go:84] configureAuth start
	I1018 09:16:23.916674  307829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-986220
	I1018 09:16:23.935731  307829 provision.go:143] copyHostCerts
	I1018 09:16:23.935809  307829 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:16:23.935828  307829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:16:23.935910  307829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:16:23.936072  307829 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:16:23.936088  307829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:16:23.936136  307829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:16:23.936218  307829 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:16:23.936228  307829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:16:23.936286  307829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:16:23.936407  307829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-986220 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-986220 localhost minikube]
	I1018 09:16:24.096815  307829 provision.go:177] copyRemoteCerts
	I1018 09:16:24.096879  307829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:16:24.096916  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.116412  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:24.215442  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:16:24.236994  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 09:16:24.256007  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:16:24.275068  307829 provision.go:87] duration metric: took 358.446736ms to configureAuth
	I1018 09:16:24.275096  307829 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:16:24.275276  307829 config.go:182] Loaded profile config "default-k8s-diff-port-986220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:24.275405  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.295823  307829 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:24.296078  307829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 09:16:24.296097  307829 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:16:24.553053  307829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:16:24.553082  307829 machine.go:96] duration metric: took 4.125023459s to provisionDockerMachine
	I1018 09:16:24.553094  307829 client.go:171] duration metric: took 9.862444073s to LocalClient.Create
	I1018 09:16:24.553114  307829 start.go:167] duration metric: took 9.862511631s to libmachine.API.Create "default-k8s-diff-port-986220"
	I1018 09:16:24.553124  307829 start.go:293] postStartSetup for "default-k8s-diff-port-986220" (driver="docker")
	I1018 09:16:24.553138  307829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:16:24.553242  307829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:16:24.553291  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.572128  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:24.672893  307829 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:16:24.676680  307829 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:16:24.676709  307829 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:16:24.676719  307829 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:16:24.676777  307829 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:16:24.676867  307829 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:16:24.676983  307829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:16:24.686464  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:16:24.708946  307829 start.go:296] duration metric: took 155.806152ms for postStartSetup
	I1018 09:16:24.709434  307829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-986220
	I1018 09:16:24.729672  307829 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/config.json ...
	I1018 09:16:24.729981  307829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:16:24.730033  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.749138  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:24.846031  307829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:16:24.851592  307829 start.go:128] duration metric: took 10.163744383s to createHost
	I1018 09:16:24.851619  307829 start.go:83] releasing machines lock for "default-k8s-diff-port-986220", held for 10.163895422s
	I1018 09:16:24.851680  307829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-986220
	I1018 09:16:24.871446  307829 ssh_runner.go:195] Run: cat /version.json
	I1018 09:16:24.871492  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.871527  307829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:16:24.871607  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.892448  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:24.892466  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:25.047556  307829 ssh_runner.go:195] Run: systemctl --version
	I1018 09:16:25.056042  307829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:16:25.095154  307829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:16:25.100317  307829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:16:25.100404  307829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:16:25.135472  307829 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 09:16:25.135500  307829 start.go:495] detecting cgroup driver to use...
	I1018 09:16:25.135533  307829 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:16:25.135579  307829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:16:25.163992  307829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:16:25.179086  307829 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:16:25.179151  307829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:16:25.197806  307829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:16:25.218805  307829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:16:25.310534  307829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:16:25.402675  307829 docker.go:234] disabling docker service ...
	I1018 09:16:25.402736  307829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:16:25.424774  307829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:16:25.439087  307829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:16:25.533380  307829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:16:25.620820  307829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:16:25.636909  307829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:16:25.654401  307829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:16:25.654463  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.667479  307829 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:16:25.667553  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.678806  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.692980  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.704763  307829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:16:25.715821  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.727218  307829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.742569  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.752002  307829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:16:25.760372  307829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:16:25.768535  307829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:25.856991  307829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:16:25.969026  307829 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:16:25.969096  307829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:16:25.974137  307829 start.go:563] Will wait 60s for crictl version
	I1018 09:16:25.974200  307829 ssh_runner.go:195] Run: which crictl
	I1018 09:16:25.978663  307829 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:16:26.006946  307829 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:16:26.007028  307829 ssh_runner.go:195] Run: crio --version
	I1018 09:16:26.037634  307829 ssh_runner.go:195] Run: crio --version
	I1018 09:16:26.069278  307829 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:16:21.481637  309439 out.go:252] * Restarting existing docker container for "no-preload-031066" ...
	I1018 09:16:21.481720  309439 cli_runner.go:164] Run: docker start no-preload-031066
	I1018 09:16:21.732544  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:21.752925  309439 kic.go:430] container "no-preload-031066" state is running.
	I1018 09:16:21.753416  309439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-031066
	I1018 09:16:21.774132  309439 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/config.json ...
	I1018 09:16:21.774479  309439 machine.go:93] provisionDockerMachine start ...
	I1018 09:16:21.774570  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:21.795137  309439 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:21.795458  309439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 09:16:21.795477  309439 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:16:21.796069  309439 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52300->127.0.0.1:33113: read: connection reset by peer
	I1018 09:16:24.935395  309439 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-031066
	
	I1018 09:16:24.935424  309439 ubuntu.go:182] provisioning hostname "no-preload-031066"
	I1018 09:16:24.935491  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:24.955546  309439 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:24.955764  309439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 09:16:24.955779  309439 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-031066 && echo "no-preload-031066" | sudo tee /etc/hostname
	I1018 09:16:25.103825  309439 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-031066
	
	I1018 09:16:25.103917  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:25.127296  309439 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:25.127611  309439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 09:16:25.127652  309439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-031066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-031066/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-031066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:16:25.274198  309439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
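The script above is an idempotent /etc/hosts edit: rewrite the existing 127.0.1.1 line if present, otherwise append one. A quick way to verify the result after it runs (hostname taken from the log):

	getent hosts no-preload-031066   # should print "127.0.1.1  no-preload-031066"
	hostname                         # should print "no-preload-031066"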
	I1018 09:16:25.274224  309439 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:16:25.274267  309439 ubuntu.go:190] setting up certificates
	I1018 09:16:25.274280  309439 provision.go:84] configureAuth start
	I1018 09:16:25.274327  309439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-031066
	I1018 09:16:25.295152  309439 provision.go:143] copyHostCerts
	I1018 09:16:25.295209  309439 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:16:25.295222  309439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:16:25.295281  309439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:16:25.295411  309439 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:16:25.295423  309439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:16:25.295448  309439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:16:25.295525  309439 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:16:25.295533  309439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:16:25.295554  309439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:16:25.295606  309439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.no-preload-031066 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-031066]
	I1018 09:16:25.425118  309439 provision.go:177] copyRemoteCerts
	I1018 09:16:25.425176  309439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:16:25.425241  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:25.445036  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:25.543837  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:16:25.565616  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:16:25.589029  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:16:25.608484  309439 provision.go:87] duration metric: took 334.191405ms to configureAuth
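configureAuth signs a fresh server certificate against minikube's local CA, with the SAN list printed a few lines up (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-031066). minikube does this in Go; an equivalent openssl sketch with hypothetical local filenames:

	# Generate a key and CSR, then sign with the minikube CA, embedding the same SANs.
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.no-preload-031066" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:no-preload-031066') \
	  -out server.pem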
	I1018 09:16:25.608516  309439 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:16:25.608733  309439 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:25.608856  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:25.632064  309439 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:25.632401  309439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 09:16:25.632427  309439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:16:25.957864  309439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:16:25.957894  309439 machine.go:96] duration metric: took 4.183393935s to provisionDockerMachine
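The /etc/sysconfig/crio.minikube file written just above carries extra runtime flags (here --insecure-registry for the service CIDR); presumably the crio unit in the kicbase image sources it via an EnvironmentFile= directive. To confirm on a given node:

	systemctl cat crio | grep -i environmentfile   # shows whether the unit sources the sysconfig file
	cat /etc/sysconfig/crio.minikube               # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '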
	I1018 09:16:25.957909  309439 start.go:293] postStartSetup for "no-preload-031066" (driver="docker")
	I1018 09:16:25.957922  309439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:16:25.957977  309439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:16:25.958020  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:25.980314  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:26.082603  309439 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:16:26.086751  309439 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:16:26.086778  309439 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:16:26.086789  309439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:16:26.086848  309439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:16:26.086937  309439 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:16:26.087048  309439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:16:26.096192  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:16:26.115776  309439 start.go:296] duration metric: took 157.850809ms for postStartSetup
	I1018 09:16:26.115859  309439 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:16:26.115914  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:26.137971  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:26.234585  309439 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:16:26.239791  309439 fix.go:56] duration metric: took 4.779536543s for fixHost
	I1018 09:16:26.239820  309439 start.go:83] releasing machines lock for "no-preload-031066", held for 4.779588591s
	I1018 09:16:26.239895  309439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-031066
	I1018 09:16:26.259555  309439 ssh_runner.go:195] Run: cat /version.json
	W1018 09:16:23.111428  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:25.112093  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	I1018 09:16:26.259669  309439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:16:26.259627  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:26.259792  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:26.281760  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:26.281753  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:26.435671  309439 ssh_runner.go:195] Run: systemctl --version
	I1018 09:16:26.443263  309439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:16:26.486908  309439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:16:26.492101  309439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:16:26.492171  309439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:16:26.501157  309439 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:16:26.501179  309439 start.go:495] detecting cgroup driver to use...
	I1018 09:16:26.501207  309439 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:16:26.501261  309439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:16:26.517601  309439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:16:26.535073  309439 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:16:26.535137  309439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:16:26.559014  309439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:16:26.573192  309439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:16:26.664628  309439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:16:26.753369  309439 docker.go:234] disabling docker service ...
	I1018 09:16:26.753441  309439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:16:26.769930  309439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:16:26.784250  309439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:16:26.875825  309439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:16:26.963494  309439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
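The stop/disable/mask sequence above ensures only one container runtime owns the node; masking prevents socket activation from resurrecting the services behind CRI-O's back. Condensed into a standalone sketch mirroring the logged commands:

	# Stop, disable, and mask Docker and cri-dockerd so CRI-O is the only runtime.
	sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	systemctl is-active --quiet docker && echo "docker still active" || echo "docker inactive"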
	I1018 09:16:26.977661  309439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:16:26.995292  309439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:16:26.995366  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.005257  309439 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:16:27.005335  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.015687  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.026502  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.037104  309439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:16:27.046231  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.056592  309439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.066210  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.076520  309439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:16:27.086299  309439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:16:27.099809  309439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:27.202216  309439 ssh_runner.go:195] Run: sudo systemctl restart crio
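After the sed edits above (pause image, cgroup manager, conmon cgroup, sysctls), the relevant parts of 02-crio.conf should read roughly as follows. A sketch: section placement and the rest of the file depend on the image.

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]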
	I1018 09:16:27.320390  309439 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:16:27.320456  309439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:16:27.325144  309439 start.go:563] Will wait 60s for crictl version
	I1018 09:16:27.325213  309439 ssh_runner.go:195] Run: which crictl
	I1018 09:16:27.329944  309439 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:16:27.360229  309439 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:16:27.360337  309439 ssh_runner.go:195] Run: crio --version
	I1018 09:16:27.392843  309439 ssh_runner.go:195] Run: crio --version
	I1018 09:16:27.430211  309439 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:16:26.070774  307829 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-986220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:16:26.091069  307829 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 09:16:26.095294  307829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
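That one-liner is minikube's pattern for replacing a tab-separated /etc/hosts entry without duplicating it: filter out the old line, append the new one, and copy the result back atomically. Generalized, with HOSTS_IP/HOSTS_NAME as placeholder variables:

	HOSTS_IP=192.168.94.1 HOSTS_NAME=host.minikube.internal
	{ grep -v $'\t'"${HOSTS_NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$HOSTS_IP" "$HOSTS_NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$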
	I1018 09:16:26.106817  307829 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-986220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:16:26.106953  307829 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:16:26.107001  307829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:16:26.146050  307829 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:16:26.146071  307829 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:16:26.146117  307829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:16:26.175872  307829 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:16:26.175899  307829 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:16:26.175908  307829 kubeadm.go:934] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1018 09:16:26.176038  307829 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-986220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:16:26.176148  307829 ssh_runner.go:195] Run: crio config
	I1018 09:16:26.227370  307829 cni.go:84] Creating CNI manager for ""
	I1018 09:16:26.227396  307829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:16:26.227416  307829 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:16:26.227445  307829 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-986220 NodeName:default-k8s-diff-port-986220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:16:26.227594  307829 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-986220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
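The three stanzas above (InitConfiguration, ClusterConfiguration, and the kubelet/kube-proxy component configs) are written to /var/tmp/minikube/kubeadm.yaml before init. A config like this can be sanity-checked without touching the node; a sketch using the kubeadm binary minikube stages, at the path shown in the log:

	# --dry-run validates the config and prints the generated manifests without changing the node.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	# List the images the config implies, e.g. for pre-pulling.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml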
	
	I1018 09:16:26.227669  307829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:16:26.236922  307829 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:16:26.236985  307829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:16:26.246249  307829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 09:16:26.261061  307829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:16:26.281576  307829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1018 09:16:26.296929  307829 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:16:26.300975  307829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:16:26.313470  307829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:26.402470  307829 ssh_runner.go:195] Run: sudo systemctl start kubelet
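The three scp's above install the kubelet systemd unit, its kubeadm drop-in (the [Unit]/[Service] fragment printed earlier), and the staged kubeadm.yaml; daemon-reload plus start then brings kubelet up. The equivalent manual steps, assuming local copies of the unit files (hypothetical filenames):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo cp kubelet.service /lib/systemd/system/kubelet.service
	sudo systemctl daemon-reload && sudo systemctl start kubelet
	systemctl is-active kubelet   # prints "active" once the unit is up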
	I1018 09:16:26.432058  307829 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220 for IP: 192.168.94.2
	I1018 09:16:26.432089  307829 certs.go:195] generating shared ca certs ...
	I1018 09:16:26.432109  307829 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:26.432273  307829 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:16:26.432354  307829 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:16:26.432374  307829 certs.go:257] generating profile certs ...
	I1018 09:16:26.432456  307829 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.key
	I1018 09:16:26.432479  307829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.crt with IP's: []
	I1018 09:16:26.858948  307829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.crt ...
	I1018 09:16:26.858974  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.crt: {Name:mk51c8869bcfadfee754b4430b46c6f8826cd48e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:26.859138  307829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.key ...
	I1018 09:16:26.859151  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.key: {Name:mk25866fa200b9b02b356bf6c37bf61a8173ffbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:26.859263  307829 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key.6dd2aec8
	I1018 09:16:26.859285  307829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt.6dd2aec8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1018 09:16:27.395262  307829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt.6dd2aec8 ...
	I1018 09:16:27.395288  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt.6dd2aec8: {Name:mk6e21b854f39a72826bd85be5ec5fc298b199fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:27.395475  307829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key.6dd2aec8 ...
	I1018 09:16:27.395491  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key.6dd2aec8: {Name:mk0894105faa3c087ffd9c9fdc31379b6526b690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:27.395577  307829 certs.go:382] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt.6dd2aec8 -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt
	I1018 09:16:27.395651  307829 certs.go:386] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key.6dd2aec8 -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key
	I1018 09:16:27.395705  307829 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.key
	I1018 09:16:27.395722  307829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.crt with IP's: []
	I1018 09:16:27.602598  307829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.crt ...
	I1018 09:16:27.602624  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.crt: {Name:mk8306903932dd1bb11b8ea9409214667367047c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:27.602816  307829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.key ...
	I1018 09:16:27.602835  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.key: {Name:mkd19044a8fb32eff2e080ea7a1555b5849cc3b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:27.603059  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:16:27.603102  307829 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:16:27.603119  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:16:27.603157  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:16:27.603187  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:16:27.603220  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:16:27.603272  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:16:27.603874  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:16:27.624224  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:16:27.643202  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:16:27.667310  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:16:27.687198  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 09:16:27.706826  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:16:27.726322  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:16:27.745966  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:16:27.766635  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:16:27.789853  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:16:27.812727  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:16:27.839899  307829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:16:27.854585  307829 ssh_runner.go:195] Run: openssl version
	I1018 09:16:27.862043  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:16:27.871459  307829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:27.875984  307829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:27.876057  307829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:27.912565  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:16:27.923183  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:16:27.932933  307829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:16:27.937886  307829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:16:27.937949  307829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:16:27.977776  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:16:27.988396  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:16:27.997972  307829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.002236  307829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.002294  307829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.040608  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
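The openssl/ln pairs above reproduce what c_rehash does: OpenSSL looks up CA certificates by subject-hash filenames, so each PEM dropped into /usr/share/ca-certificates gets a <hash>.0 symlink in /etc/ssl/certs. As a standalone sketch:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, matching the symlink above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"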
	I1018 09:16:28.051449  307829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:16:28.055923  307829 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:16:28.055980  307829 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-986220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:16:28.056051  307829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:16:28.056119  307829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:16:28.089132  307829 cri.go:89] found id: ""
	I1018 09:16:28.089192  307829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:16:28.099177  307829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:16:28.109267  307829 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:16:28.109329  307829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:16:28.120642  307829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:16:28.120666  307829 kubeadm.go:157] found existing configuration files:
	
	I1018 09:16:28.120718  307829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1018 09:16:28.131668  307829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:16:28.131734  307829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:16:28.142231  307829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1018 09:16:28.155016  307829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:16:28.155078  307829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:16:28.166186  307829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1018 09:16:28.177468  307829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:16:28.177540  307829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:16:28.189027  307829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1018 09:16:28.199965  307829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:16:28.200051  307829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:16:28.209045  307829 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:16:28.261581  307829 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:16:28.261670  307829 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:16:28.299228  307829 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:16:28.299358  307829 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:16:28.299410  307829 kubeadm.go:318] OS: Linux
	I1018 09:16:28.299478  307829 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:16:28.299612  307829 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:16:28.299657  307829 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:16:28.299700  307829 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:16:28.299742  307829 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:16:28.299787  307829 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:16:28.299829  307829 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:16:28.299868  307829 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:16:28.395707  307829 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:16:28.395841  307829 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:16:28.395964  307829 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:16:28.413235  307829 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1018 09:16:24.295019  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:26.793721  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:27.431467  309439 cli_runner.go:164] Run: docker network inspect no-preload-031066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:16:27.452092  309439 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 09:16:27.456746  309439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:16:27.467844  309439 kubeadm.go:883] updating cluster {Name:no-preload-031066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:16:27.467966  309439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:16:27.468011  309439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:16:27.503028  309439 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:16:27.503054  309439 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:16:27.503062  309439 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 09:16:27.503150  309439 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-031066 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:16:27.503211  309439 ssh_runner.go:195] Run: crio config
	I1018 09:16:27.551972  309439 cni.go:84] Creating CNI manager for ""
	I1018 09:16:27.552003  309439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:16:27.552027  309439 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:16:27.552059  309439 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-031066 NodeName:no-preload-031066 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:16:27.552228  309439 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-031066"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:16:27.552303  309439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:16:27.561492  309439 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:16:27.561566  309439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:16:27.570137  309439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 09:16:27.584540  309439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:16:27.598223  309439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1018 09:16:27.612505  309439 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:16:27.616378  309439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:16:27.628297  309439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:27.719096  309439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:16:27.742152  309439 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066 for IP: 192.168.85.2
	I1018 09:16:27.742182  309439 certs.go:195] generating shared ca certs ...
	I1018 09:16:27.742204  309439 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:27.742412  309439 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:16:27.742502  309439 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:16:27.742521  309439 certs.go:257] generating profile certs ...
	I1018 09:16:27.742635  309439 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/client.key
	I1018 09:16:27.742703  309439 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/apiserver.key.5b17cd89
	I1018 09:16:27.742770  309439 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/proxy-client.key
	I1018 09:16:27.742919  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:16:27.742965  309439 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:16:27.742982  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:16:27.743018  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:16:27.743053  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:16:27.743084  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:16:27.743146  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:16:27.744065  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:16:27.766446  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:16:27.789662  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:16:27.810502  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:16:27.837044  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:16:27.858195  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:16:27.878029  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:16:27.898104  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:16:27.918370  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:16:27.938784  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:16:27.959384  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:16:27.979401  309439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:16:27.994769  309439 ssh_runner.go:195] Run: openssl version
	I1018 09:16:28.001920  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:16:28.011616  309439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.015846  309439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.015902  309439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.055574  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:16:28.064720  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:16:28.074630  309439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:28.079603  309439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:28.079670  309439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:28.127275  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:16:28.140151  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:16:28.152584  309439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:16:28.158002  309439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:16:28.158067  309439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:16:28.211197  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
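The three openssl/ln passes above implement the classic c_rehash layout: each PEM under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs, so OpenSSL-based clients can locate the CA by hash. A sketch of one pass (the PEM path is a placeholder; writing /etc/ssl/certs requires root):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash mirrors what the log does with openssl + ln: compute
    // the certificate's subject hash, then symlink <hash>.0 to the PEM.
    func linkBySubjectHash(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // replace any stale link, like "ln -fs"
    	if err := os.Symlink(pem, link); err != nil {
    		return err
    	}
    	fmt.Println(link, "->", pem)
    	return nil
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		log.Fatal(err)
    	}
    }
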
	I1018 09:16:28.220083  309439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:16:28.224791  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:16:28.278506  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:16:28.328610  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:16:28.392663  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:16:28.455567  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:16:28.519223  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
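"openssl x509 -checkend 86400" exits non-zero when the certificate expires within the next 24 hours, which is how the runner decides whether a cert needs regenerating. The equivalent check in pure Go with crypto/x509 (the cert path is a placeholder):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires in
    // less than d, the Go equivalent of "openssl x509 -checkend".
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
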
	I1018 09:16:28.580698  309439 kubeadm.go:400] StartCluster: {Name:no-preload-031066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:16:28.580833  309439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:16:28.580901  309439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:16:28.624088  309439 cri.go:89] found id: "153dd41ff60f495d247d4bd42054dd9255c2fe5ccbc173f31021152a50b30308"
	I1018 09:16:28.624113  309439 cri.go:89] found id: "b51a1224ef6b876bc35ce20f2366f94525e300e4432dff8348abbde915ade5af"
	I1018 09:16:28.624118  309439 cri.go:89] found id: "62682de07bbfeb0d0f0c6405121566236410d571314651f369c15f65938b548a"
	I1018 09:16:28.624125  309439 cri.go:89] found id: "db536597b2746191742cfa1b8df28f2fe3935b9a553d5543f993db2773c9f6a1"
	I1018 09:16:28.624129  309439 cri.go:89] found id: ""
	I1018 09:16:28.624177  309439 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:16:28.642550  309439 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:16:28Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:16:28.642622  309439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:16:28.658418  309439 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:16:28.658440  309439 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:16:28.658714  309439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:16:28.670518  309439 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:16:28.671730  309439 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-031066" does not appear in /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:16:28.672554  309439 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-5897/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-031066" cluster setting kubeconfig missing "no-preload-031066" context setting]
	I1018 09:16:28.673952  309439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
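The repair step loads the kubeconfig, inserts the missing cluster and context entries under the profile name, and writes the file back under a lock. A minimal sketch of the load-patch-write cycle with client-go's clientcmd package (cluster name, server URL, and path are placeholders; the real code also wires in credentials and the file lock seen above):

    package main

    import (
    	"log"

    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	const (
    		path   = "/home/user/.kube/config" // placeholder
    		name   = "no-preload-031066"
    		server = "https://192.168.85.2:8443"
    	)
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Add the cluster and context entries the verifier found missing.
    	if _, ok := cfg.Clusters[name]; !ok {
    		cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server}
    	}
    	if _, ok := cfg.Contexts[name]; !ok {
    		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
    	}
    	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
    		log.Fatal(err)
    	}
    }
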
	I1018 09:16:28.676681  309439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:16:28.689778  309439 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 09:16:28.689902  309439 kubeadm.go:601] duration metric: took 31.455758ms to restartPrimaryControlPlane
	I1018 09:16:28.689919  309439 kubeadm.go:402] duration metric: took 109.246641ms to StartCluster
	I1018 09:16:28.689940  309439 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:28.690009  309439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:16:28.692230  309439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:28.692547  309439 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:16:28.692792  309439 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:28.692794  309439 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:16:28.692955  309439 addons.go:69] Setting storage-provisioner=true in profile "no-preload-031066"
	I1018 09:16:28.692978  309439 addons.go:238] Setting addon storage-provisioner=true in "no-preload-031066"
	I1018 09:16:28.692975  309439 addons.go:69] Setting dashboard=true in profile "no-preload-031066"
	I1018 09:16:28.692996  309439 addons.go:69] Setting default-storageclass=true in profile "no-preload-031066"
	I1018 09:16:28.693007  309439 addons.go:238] Setting addon dashboard=true in "no-preload-031066"
	I1018 09:16:28.693015  309439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-031066"
	W1018 09:16:28.693018  309439 addons.go:247] addon dashboard should already be in state true
	I1018 09:16:28.693055  309439 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:16:28.693384  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	W1018 09:16:28.692987  309439 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:16:28.693557  309439 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:16:28.693612  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:28.694012  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:28.694776  309439 out.go:179] * Verifying Kubernetes components...
	I1018 09:16:28.696220  309439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:28.724766  309439 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:16:28.726175  309439 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:16:28.727367  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:16:28.727390  309439 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:16:28.727455  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:28.728591  309439 addons.go:238] Setting addon default-storageclass=true in "no-preload-031066"
	W1018 09:16:28.728613  309439 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:16:28.728642  309439 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:16:28.729157  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:28.730548  309439 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:16:28.416330  307829 out.go:252]   - Generating certificates and keys ...
	I1018 09:16:28.416469  307829 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:16:28.416585  307829 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:16:28.961544  307829 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:16:29.130817  307829 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:16:28.733826  309439 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:16:28.733946  309439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:16:28.734145  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:28.758429  309439 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:16:28.758462  309439 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:16:28.758530  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:28.765380  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:28.782447  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:28.799019  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:28.912215  309439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:16:28.934701  309439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:16:28.935357  309439 node_ready.go:35] waiting up to 6m0s for node "no-preload-031066" to be "Ready" ...
	I1018 09:16:28.970747  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:16:28.970915  309439 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:16:28.972898  309439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:16:29.005063  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:16:29.005087  309439 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:16:29.060862  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:16:29.060897  309439 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:16:29.081097  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:16:29.081122  309439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:16:29.100999  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:16:29.101045  309439 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:16:29.120688  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:16:29.120720  309439 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:16:29.139590  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:16:29.139620  309439 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:16:29.157828  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:16:29.157857  309439 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:16:29.177540  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:16:29.177566  309439 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:16:29.198120  309439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:16:30.672748  309439 node_ready.go:49] node "no-preload-031066" is "Ready"
	I1018 09:16:30.672787  309439 node_ready.go:38] duration metric: took 1.737388567s for node "no-preload-031066" to be "Ready" ...
	I1018 09:16:30.672804  309439 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:16:30.672858  309439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:16:31.507012  309439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.572254383s)
	I1018 09:16:31.507099  309439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.534180069s)
	I1018 09:16:31.507726  309439 api_server.go:72] duration metric: took 2.815152526s to wait for apiserver process to appear ...
	I1018 09:16:31.507747  309439 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:16:31.507767  309439 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:16:31.508258  309439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.309493299s)
	I1018 09:16:31.510608  309439 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-031066 addons enable metrics-server
	
	I1018 09:16:31.515473  309439 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:16:31.515509  309439 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
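A 500 from /healthz immediately after a restart is expected: the two failing entries above are poststarthooks (RBAC bootstrap roles and system priority classes) that have simply not finished yet. The runner just polls until the endpoint returns 200 with body "ok", as it does a few lines below. A sketch of that poll (the endpoint is a placeholder; TLS verification is skipped here only to keep the sketch self-contained, where the real code trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body)) // "ok"
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("apiserver never became healthy")
    }
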
	I1018 09:16:31.521256  309439 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1018 09:16:27.113789  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:29.611877  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:28.804271  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:31.298052  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:29.615880  307829 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:16:29.662294  307829 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:16:30.234104  307829 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:16:30.234392  307829 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-986220 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:16:30.435950  307829 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:16:30.436322  307829 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-986220 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:16:30.721773  307829 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:16:31.077742  307829 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:16:31.728841  307829 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:16:31.729054  307829 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:16:32.282669  307829 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:16:32.757782  307829 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:16:33.241823  307829 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:16:33.509889  307829 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:16:33.955012  307829 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:16:33.955761  307829 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:16:33.959972  307829 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:16:33.961487  307829 out.go:252]   - Booting up control plane ...
	I1018 09:16:33.961586  307829 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:16:33.961682  307829 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:16:33.962289  307829 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:16:33.978521  307829 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:16:33.979073  307829 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:16:33.987745  307829 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:16:33.988059  307829 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:16:33.988143  307829 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:16:34.106714  307829 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:16:34.106869  307829 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:16:31.522773  309439 addons.go:514] duration metric: took 2.829985828s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:16:32.008497  309439 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:16:32.013652  309439 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 09:16:32.014899  309439 api_server.go:141] control plane version: v1.34.1
	I1018 09:16:32.014936  309439 api_server.go:131] duration metric: took 507.174967ms to wait for apiserver health ...
	I1018 09:16:32.014946  309439 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:16:32.018941  309439 system_pods.go:59] 8 kube-system pods found
	I1018 09:16:32.018978  309439 system_pods.go:61] "coredns-66bc5c9577-h44wj" [0f9ac8bf-4d8f-489f-a5bb-f8ef2d832a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:32.018993  309439 system_pods.go:61] "etcd-no-preload-031066" [46ee9eac-4087-442e-855b-50a8b65b06df] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:16:32.019001  309439 system_pods.go:61] "kindnet-k7m9t" [08c34b72-06a7-4a73-b703-ce61dbf3a37f] Running
	I1018 09:16:32.019011  309439 system_pods.go:61] "kube-apiserver-no-preload-031066" [7b20717e-d3b8-4f72-9c92-04c74b236964] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:16:32.019019  309439 system_pods.go:61] "kube-controller-manager-no-preload-031066" [e8145322-b25f-40ec-aa8f-39b64900226c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:16:32.019025  309439 system_pods.go:61] "kube-proxy-jr5qn" [1ae92f3f-9c07-4fb0-8334-549bfd4cac76] Running
	I1018 09:16:32.019033  309439 system_pods.go:61] "kube-scheduler-no-preload-031066" [2d6fcc42-b0a0-46d3-8eb1-7408eadd4dc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:16:32.019047  309439 system_pods.go:61] "storage-provisioner" [5b3e8950-c8a2-4205-b3aa-5c48157fc9d1] Running
	I1018 09:16:32.019057  309439 system_pods.go:74] duration metric: took 4.103211ms to wait for pod list to return data ...
	I1018 09:16:32.019071  309439 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:16:32.021957  309439 default_sa.go:45] found service account: "default"
	I1018 09:16:32.021981  309439 default_sa.go:55] duration metric: took 2.904005ms for default service account to be created ...
	I1018 09:16:32.021993  309439 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:16:32.025565  309439 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:32.025597  309439 system_pods.go:89] "coredns-66bc5c9577-h44wj" [0f9ac8bf-4d8f-489f-a5bb-f8ef2d832a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:32.025607  309439 system_pods.go:89] "etcd-no-preload-031066" [46ee9eac-4087-442e-855b-50a8b65b06df] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:16:32.025615  309439 system_pods.go:89] "kindnet-k7m9t" [08c34b72-06a7-4a73-b703-ce61dbf3a37f] Running
	I1018 09:16:32.025624  309439 system_pods.go:89] "kube-apiserver-no-preload-031066" [7b20717e-d3b8-4f72-9c92-04c74b236964] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:16:32.025633  309439 system_pods.go:89] "kube-controller-manager-no-preload-031066" [e8145322-b25f-40ec-aa8f-39b64900226c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:16:32.025650  309439 system_pods.go:89] "kube-proxy-jr5qn" [1ae92f3f-9c07-4fb0-8334-549bfd4cac76] Running
	I1018 09:16:32.025660  309439 system_pods.go:89] "kube-scheduler-no-preload-031066" [2d6fcc42-b0a0-46d3-8eb1-7408eadd4dc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:16:32.025674  309439 system_pods.go:89] "storage-provisioner" [5b3e8950-c8a2-4205-b3aa-5c48157fc9d1] Running
	I1018 09:16:32.025689  309439 system_pods.go:126] duration metric: took 3.68704ms to wait for k8s-apps to be running ...
	I1018 09:16:32.025702  309439 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:16:32.025758  309439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:16:32.043072  309439 system_svc.go:56] duration metric: took 17.35931ms WaitForService to wait for kubelet
	I1018 09:16:32.043109  309439 kubeadm.go:586] duration metric: took 3.350536737s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:16:32.043130  309439 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:16:32.047292  309439 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:16:32.047324  309439 node_conditions.go:123] node cpu capacity is 8
	I1018 09:16:32.047338  309439 node_conditions.go:105] duration metric: took 4.202003ms to run NodePressure ...
	I1018 09:16:32.047376  309439 start.go:241] waiting for startup goroutines ...
	I1018 09:16:32.047386  309439 start.go:246] waiting for cluster config update ...
	I1018 09:16:32.047405  309439 start.go:255] writing updated cluster config ...
	I1018 09:16:32.047889  309439 ssh_runner.go:195] Run: rm -f paused
	I1018 09:16:32.053052  309439 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:16:32.058191  309439 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h44wj" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:16:34.063715  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:36.065474  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
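"Ready" in these pod_ready lines means the pod's PodReady condition is True, not merely that its phase is Running: the coredns pod above is Running but its readiness probe has not passed yet. The check reduces to a few lines against the core/v1 types:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady mirrors the pod_ready check in the log: a pod counts as
    // "Ready" only when its PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	pod := &corev1.Pod{
    		Status: corev1.PodStatus{
    			Phase: corev1.PodRunning, // Running, but...
    			Conditions: []corev1.PodCondition{
    				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
    			},
    		},
    	}
    	fmt.Println("ready:", isPodReady(pod)) // false, like coredns above
    }
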
	W1018 09:16:32.111388  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:34.111683  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:36.112976  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:33.795029  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:36.295172  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:38.295449  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:35.108493  307829 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002011819s
	I1018 09:16:35.113205  307829 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:16:35.113369  307829 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1018 09:16:35.113508  307829 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:16:35.113618  307829 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:16:37.315142  307829 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.201785879s
	I1018 09:16:38.892629  307829 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.779357908s
	W1018 09:16:38.565895  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:41.065990  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	I1018 09:16:41.117083  307829 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.003684215s
	I1018 09:16:41.169314  307829 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:16:41.210944  307829 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:16:41.252804  307829 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:16:41.253079  307829 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-986220 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:16:41.306958  307829 kubeadm.go:318] [bootstrap-token] Using token: f3p04i.qkc1arqowwwf8733
	W1018 09:16:38.611800  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:40.612030  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	I1018 09:16:41.611642  295389 node_ready.go:49] node "embed-certs-880603" is "Ready"
	I1018 09:16:41.611675  295389 node_ready.go:38] duration metric: took 41.503667581s for node "embed-certs-880603" to be "Ready" ...
	I1018 09:16:41.611697  295389 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:16:41.611765  295389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:16:41.627995  295389 api_server.go:72] duration metric: took 41.84915441s to wait for apiserver process to appear ...
	I1018 09:16:41.628025  295389 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:16:41.628048  295389 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:16:41.633763  295389 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:16:41.635685  295389 api_server.go:141] control plane version: v1.34.1
	I1018 09:16:41.635717  295389 api_server.go:131] duration metric: took 7.685454ms to wait for apiserver health ...
	I1018 09:16:41.635728  295389 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:16:41.640645  295389 system_pods.go:59] 8 kube-system pods found
	I1018 09:16:41.640688  295389 system_pods.go:61] "coredns-66bc5c9577-7fnw7" [04bb2d33-29f9-45e9-a6b1-e2b770651c0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:41.640697  295389 system_pods.go:61] "etcd-embed-certs-880603" [da7643b6-9066-4e2f-99eb-c2e6d085f539] Running
	I1018 09:16:41.640707  295389 system_pods.go:61] "kindnet-wzdm5" [20629c75-ca93-46db-875e-49d67c7b3f06] Running
	I1018 09:16:41.640713  295389 system_pods.go:61] "kube-apiserver-embed-certs-880603" [1e4ec0ef-dc43-4939-a733-02690b04d19b] Running
	I1018 09:16:41.640717  295389 system_pods.go:61] "kube-controller-manager-embed-certs-880603" [ccc2c9f0-0b2c-46e5-bea8-c82e8e3124ec] Running
	I1018 09:16:41.640720  295389 system_pods.go:61] "kube-proxy-k4kcs" [83d1821f-468a-4bf0-8fc0-e40e0668f6ff] Running
	I1018 09:16:41.640724  295389 system_pods.go:61] "kube-scheduler-embed-certs-880603" [5635b7e1-dca9-4a7e-8b9c-aa96067fd707] Running
	I1018 09:16:41.640728  295389 system_pods.go:61] "storage-provisioner" [d2aa7a09-3332-4744-9180-d307b4fc8194] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:16:41.640734  295389 system_pods.go:74] duration metric: took 5.000069ms to wait for pod list to return data ...
	I1018 09:16:41.640743  295389 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:16:41.644176  295389 default_sa.go:45] found service account: "default"
	I1018 09:16:41.644203  295389 default_sa.go:55] duration metric: took 3.451989ms for default service account to be created ...
	I1018 09:16:41.644216  295389 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:16:41.648178  295389 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:41.648208  295389 system_pods.go:89] "coredns-66bc5c9577-7fnw7" [04bb2d33-29f9-45e9-a6b1-e2b770651c0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:41.648214  295389 system_pods.go:89] "etcd-embed-certs-880603" [da7643b6-9066-4e2f-99eb-c2e6d085f539] Running
	I1018 09:16:41.648220  295389 system_pods.go:89] "kindnet-wzdm5" [20629c75-ca93-46db-875e-49d67c7b3f06] Running
	I1018 09:16:41.648223  295389 system_pods.go:89] "kube-apiserver-embed-certs-880603" [1e4ec0ef-dc43-4939-a733-02690b04d19b] Running
	I1018 09:16:41.648228  295389 system_pods.go:89] "kube-controller-manager-embed-certs-880603" [ccc2c9f0-0b2c-46e5-bea8-c82e8e3124ec] Running
	I1018 09:16:41.648231  295389 system_pods.go:89] "kube-proxy-k4kcs" [83d1821f-468a-4bf0-8fc0-e40e0668f6ff] Running
	I1018 09:16:41.648235  295389 system_pods.go:89] "kube-scheduler-embed-certs-880603" [5635b7e1-dca9-4a7e-8b9c-aa96067fd707] Running
	I1018 09:16:41.648239  295389 system_pods.go:89] "storage-provisioner" [d2aa7a09-3332-4744-9180-d307b4fc8194] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:16:41.648267  295389 retry.go:31] will retry after 192.969575ms: missing components: kube-dns
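The "will retry after 192.969575ms" line comes from minikube's retry helper: it re-runs the check with a randomized, growing delay until the missing component (kube-dns here) appears or the time budget is spent. A generic sketch of that jittered-backoff loop (the shape of the helper, not its exact parameters):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn until it succeeds or maxWait elapses, sleeping a
    // jittered, doubling delay between attempts.
    func retry(fn func() error, maxWait time.Duration) error {
    	delay := 100 * time.Millisecond
    	deadline := time.Now().Add(maxWait)
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return err
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    }

    func main() {
    	attempts := 0
    	_ = retry(func() error {
    		attempts++
    		if attempts < 3 {
    			return errors.New("missing components: kube-dns")
    		}
    		return nil
    	}, 30*time.Second)
    	fmt.Println("succeeded after", attempts, "attempts")
    }
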
	I1018 09:16:41.847560  295389 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:41.847602  295389 system_pods.go:89] "coredns-66bc5c9577-7fnw7" [04bb2d33-29f9-45e9-a6b1-e2b770651c0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:41.847612  295389 system_pods.go:89] "etcd-embed-certs-880603" [da7643b6-9066-4e2f-99eb-c2e6d085f539] Running
	I1018 09:16:41.847620  295389 system_pods.go:89] "kindnet-wzdm5" [20629c75-ca93-46db-875e-49d67c7b3f06] Running
	I1018 09:16:41.847626  295389 system_pods.go:89] "kube-apiserver-embed-certs-880603" [1e4ec0ef-dc43-4939-a733-02690b04d19b] Running
	I1018 09:16:41.847633  295389 system_pods.go:89] "kube-controller-manager-embed-certs-880603" [ccc2c9f0-0b2c-46e5-bea8-c82e8e3124ec] Running
	I1018 09:16:41.847637  295389 system_pods.go:89] "kube-proxy-k4kcs" [83d1821f-468a-4bf0-8fc0-e40e0668f6ff] Running
	I1018 09:16:41.847642  295389 system_pods.go:89] "kube-scheduler-embed-certs-880603" [5635b7e1-dca9-4a7e-8b9c-aa96067fd707] Running
	I1018 09:16:41.847646  295389 system_pods.go:89] "storage-provisioner" [d2aa7a09-3332-4744-9180-d307b4fc8194] Running
	I1018 09:16:41.847658  295389 system_pods.go:126] duration metric: took 203.434861ms to wait for k8s-apps to be running ...
	I1018 09:16:41.847708  295389 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:16:41.847760  295389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:16:41.864768  295389 system_svc.go:56] duration metric: took 17.051428ms WaitForService to wait for kubelet
	I1018 09:16:41.864801  295389 kubeadm.go:586] duration metric: took 42.085966942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:16:41.864822  295389 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:16:41.868754  295389 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:16:41.868786  295389 node_conditions.go:123] node cpu capacity is 8
	I1018 09:16:41.868806  295389 node_conditions.go:105] duration metric: took 3.977808ms to run NodePressure ...
	I1018 09:16:41.868820  295389 start.go:241] waiting for startup goroutines ...
	I1018 09:16:41.868838  295389 start.go:246] waiting for cluster config update ...
	I1018 09:16:41.868852  295389 start.go:255] writing updated cluster config ...
	I1018 09:16:41.869184  295389 ssh_runner.go:195] Run: rm -f paused
	I1018 09:16:41.873479  295389 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:16:41.877518  295389 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7fnw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.882231  295389 pod_ready.go:94] pod "coredns-66bc5c9577-7fnw7" is "Ready"
	I1018 09:16:41.882258  295389 pod_ready.go:86] duration metric: took 4.717941ms for pod "coredns-66bc5c9577-7fnw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.884325  295389 pod_ready.go:83] waiting for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.888331  295389 pod_ready.go:94] pod "etcd-embed-certs-880603" is "Ready"
	I1018 09:16:41.888374  295389 pod_ready.go:86] duration metric: took 3.985545ms for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.890515  295389 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.894262  295389 pod_ready.go:94] pod "kube-apiserver-embed-certs-880603" is "Ready"
	I1018 09:16:41.894287  295389 pod_ready.go:86] duration metric: took 3.751424ms for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.896263  295389 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.323567  307829 out.go:252]   - Configuring RBAC rules ...
	I1018 09:16:41.323741  307829 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:16:41.323891  307829 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:16:41.401828  307829 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:16:41.461336  307829 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:16:41.465137  307829 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:16:41.469707  307829 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:16:41.525899  307829 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:16:41.942881  307829 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:16:42.524571  307829 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:16:42.525447  307829 kubeadm.go:318] 
	I1018 09:16:42.525556  307829 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:16:42.525568  307829 kubeadm.go:318] 
	I1018 09:16:42.525684  307829 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:16:42.525705  307829 kubeadm.go:318] 
	I1018 09:16:42.525741  307829 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:16:42.525845  307829 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:16:42.525926  307829 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:16:42.525935  307829 kubeadm.go:318] 
	I1018 09:16:42.526007  307829 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:16:42.526017  307829 kubeadm.go:318] 
	I1018 09:16:42.526086  307829 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:16:42.526095  307829 kubeadm.go:318] 
	I1018 09:16:42.526162  307829 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:16:42.526271  307829 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:16:42.526404  307829 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:16:42.526415  307829 kubeadm.go:318] 
	I1018 09:16:42.526533  307829 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:16:42.526640  307829 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:16:42.526665  307829 kubeadm.go:318] 
	I1018 09:16:42.526797  307829 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token f3p04i.qkc1arqowwwf8733 \
	I1018 09:16:42.526958  307829 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f \
	I1018 09:16:42.526992  307829 kubeadm.go:318] 	--control-plane 
	I1018 09:16:42.526998  307829 kubeadm.go:318] 
	I1018 09:16:42.527113  307829 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:16:42.527135  307829 kubeadm.go:318] 
	I1018 09:16:42.527260  307829 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token f3p04i.qkc1arqowwwf8733 \
	I1018 09:16:42.527431  307829 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f 
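The --discovery-token-ca-cert-hash printed above is "sha256:" plus the SHA-256 of the cluster CA certificate's DER-encoded public key (the SubjectPublicKeyInfo), which joining nodes use to pin the CA. Recomputing it from the CA file on the node:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // cluster CA
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		log.Fatal("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// kubeadm pins the DER-encoded SubjectPublicKeyInfo, not the whole cert.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("--discovery-token-ca-cert-hash sha256:%x\n", sum)
    }
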
	I1018 09:16:42.530266  307829 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:16:42.530442  307829 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:16:42.530478  307829 cni.go:84] Creating CNI manager for ""
	I1018 09:16:42.530499  307829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:16:42.533104  307829 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1018 09:16:40.796098  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:43.293707  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:42.277552  295389 pod_ready.go:94] pod "kube-controller-manager-embed-certs-880603" is "Ready"
	I1018 09:16:42.277584  295389 pod_ready.go:86] duration metric: took 381.302407ms for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:42.477778  295389 pod_ready.go:83] waiting for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:42.878053  295389 pod_ready.go:94] pod "kube-proxy-k4kcs" is "Ready"
	I1018 09:16:42.878082  295389 pod_ready.go:86] duration metric: took 400.281372ms for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:43.078230  295389 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:43.478123  295389 pod_ready.go:94] pod "kube-scheduler-embed-certs-880603" is "Ready"
	I1018 09:16:43.478149  295389 pod_ready.go:86] duration metric: took 399.897961ms for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:43.478161  295389 pod_ready.go:40] duration metric: took 1.604642015s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:16:43.525821  295389 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:16:43.527615  295389 out.go:179] * Done! kubectl is now configured to use "embed-certs-880603" cluster and "default" namespace by default
	I1018 09:16:42.534703  307829 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:16:42.539385  307829 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:16:42.539408  307829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:16:42.553684  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:16:42.780522  307829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:16:42.780591  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:42.780624  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-986220 minikube.k8s.io/updated_at=2025_10_18T09_16_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=default-k8s-diff-port-986220 minikube.k8s.io/primary=true
	I1018 09:16:42.792392  307829 ops.go:34] apiserver oom_adj: -16
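The oom_adj of -16 reflects the negative OOM-score adjustment the apiserver runs with, so the kernel's OOM killer prefers almost any other process first; the log reads it straight out of /proc via "cat /proc/$(pgrep kube-apiserver)/oom_adj". The same probe in Go (pgrep must be on PATH):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Find the newest kube-apiserver process, as pgrep does in the log.
    	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		log.Fatal("kube-apiserver not running: ", err)
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// A negative value steers the OOM killer toward other victims.
    	fmt.Printf("apiserver pid %s oom_adj: %s", pid, adj)
    }
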
	I1018 09:16:42.879101  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:43.380201  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:43.879591  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:44.380139  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1018 09:16:43.565238  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:46.064027  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	I1018 09:16:44.879531  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:45.379171  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:45.880033  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:46.380235  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:46.879320  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:46.954716  307829 kubeadm.go:1113] duration metric: took 4.174183533s to wait for elevateKubeSystemPrivileges
	I1018 09:16:46.954763  307829 kubeadm.go:402] duration metric: took 18.898789866s to StartCluster
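
Editor's note: the elevateKubeSystemPrivileges step timed above reduces to two kubectl invocations, both visible verbatim in the Run: lines at 09:16:42 — creating the RBAC binding, then polling for the default service account roughly every 500ms until it exists. Reproduced here with the binary path abbreviated to kubectl:

    # grant kube-system:default cluster-admin (from the 09:16:42.780 Run: line)
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:default \
      --kubeconfig=/var/lib/minikube/kubeconfig

    # poll until the default service account exists (retried above ~every 500ms)
    kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
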
	I1018 09:16:46.954787  307829 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:46.954887  307829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:16:46.956811  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:46.957059  307829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:16:46.957068  307829 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:16:46.957164  307829 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:16:46.957257  307829 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-986220"
	I1018 09:16:46.957273  307829 config.go:182] Loaded profile config "default-k8s-diff-port-986220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:46.957277  307829 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-986220"
	I1018 09:16:46.957273  307829 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-986220"
	I1018 09:16:46.957302  307829 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-986220"
	I1018 09:16:46.957324  307829 host.go:66] Checking if "default-k8s-diff-port-986220" exists ...
	I1018 09:16:46.957748  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:46.957965  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:46.959069  307829 out.go:179] * Verifying Kubernetes components...
	I1018 09:16:46.960365  307829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:46.985389  307829 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:16:46.986830  307829 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:16:46.986853  307829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:16:46.986931  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:46.987456  307829 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-986220"
	I1018 09:16:46.987508  307829 host.go:66] Checking if "default-k8s-diff-port-986220" exists ...
	I1018 09:16:46.988044  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:47.018839  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:47.025009  307829 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:16:47.025036  307829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:16:47.025097  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:47.048123  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:47.060152  307829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:16:47.123586  307829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:16:47.141939  307829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:16:47.165247  307829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:16:47.266448  307829 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
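
Editor's note: the host-record injection logged above is performed by the sed pipeline in the 09:16:47.060 Run: line. Reconstructed from that sed expression itself, the stanza inserted into the CoreDNS Corefile just before the `forward . /etc/resolv.conf` line is:

            hosts {
               192.168.94.1 host.minikube.internal
               fallthrough
            }

    The same pipeline also inserts a `log` directive immediately before the `errors` line, then replaces the coredns ConfigMap with the result.
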
	I1018 09:16:47.268332  307829 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-986220" to be "Ready" ...
	I1018 09:16:47.477534  307829 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1018 09:16:45.293985  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:46.294258  302609 pod_ready.go:94] pod "coredns-5dd5756b68-gwttp" is "Ready"
	I1018 09:16:46.294287  302609 pod_ready.go:86] duration metric: took 31.006532603s for pod "coredns-5dd5756b68-gwttp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.297373  302609 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.302066  302609 pod_ready.go:94] pod "etcd-old-k8s-version-951975" is "Ready"
	I1018 09:16:46.302090  302609 pod_ready.go:86] duration metric: took 4.692329ms for pod "etcd-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.305138  302609 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.309671  302609 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-951975" is "Ready"
	I1018 09:16:46.309694  302609 pod_ready.go:86] duration metric: took 4.527103ms for pod "kube-apiserver-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.312739  302609 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.492306  302609 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-951975" is "Ready"
	I1018 09:16:46.492330  302609 pod_ready.go:86] duration metric: took 179.571371ms for pod "kube-controller-manager-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.692512  302609 pod_ready.go:83] waiting for pod "kube-proxy-rrzqp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:47.093588  302609 pod_ready.go:94] pod "kube-proxy-rrzqp" is "Ready"
	I1018 09:16:47.093616  302609 pod_ready.go:86] duration metric: took 401.079405ms for pod "kube-proxy-rrzqp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:47.294894  302609 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:47.692538  302609 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-951975" is "Ready"
	I1018 09:16:47.692575  302609 pod_ready.go:86] duration metric: took 397.645548ms for pod "kube-scheduler-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:47.692591  302609 pod_ready.go:40] duration metric: took 32.409896584s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:16:47.741097  302609 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1018 09:16:47.743375  302609 out.go:203] 
	W1018 09:16:47.744703  302609 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 09:16:47.745945  302609 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 09:16:47.747222  302609 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-951975" cluster and "default" namespace by default
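
Editor's note: the "minor skew: 6" above is the gap between the kubectl minor version (34) and the cluster minor version (28), which is why the warning fires for this old-k8s-version profile but not for the 1.34.1 clusters. A toy derivation, purely illustrative and not minikube's actual check:

    // Illustrative only: deriving a "minor skew" like the 6 logged above.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference of the minor components
    // of two "major.minor.patch" version strings.
    func minorSkew(a, b string) int {
        ma, _ := strconv.Atoi(strings.Split(a, ".")[1])
        mb, _ := strconv.Atoi(strings.Split(b, ".")[1])
        if ma > mb {
            return ma - mb
        }
        return mb - ma
    }

    func main() {
        fmt.Println(minorSkew("1.34.1", "1.28.0")) // 6, matching the log line above
    }
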
	I1018 09:16:47.478905  307829 addons.go:514] duration metric: took 521.738605ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:16:47.773018  307829 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-986220" context rescaled to 1 replicas
	W1018 09:16:49.271321  307829 node_ready.go:57] node "default-k8s-diff-port-986220" has "Ready":"False" status (will retry)
	W1018 09:16:48.064660  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:50.564043  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:51.272001  307829 node_ready.go:57] node "default-k8s-diff-port-986220" has "Ready":"False" status (will retry)
	W1018 09:16:53.272439  307829 node_ready.go:57] node "default-k8s-diff-port-986220" has "Ready":"False" status (will retry)
	W1018 09:16:52.566364  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:55.064035  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:55.772336  307829 node_ready.go:57] node "default-k8s-diff-port-986220" has "Ready":"False" status (will retry)
	I1018 09:16:58.271807  307829 node_ready.go:49] node "default-k8s-diff-port-986220" is "Ready"
	I1018 09:16:58.271837  307829 node_ready.go:38] duration metric: took 11.003458928s for node "default-k8s-diff-port-986220" to be "Ready" ...
	I1018 09:16:58.271850  307829 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:16:58.271895  307829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:16:58.285057  307829 api_server.go:72] duration metric: took 11.327963121s to wait for apiserver process to appear ...
	I1018 09:16:58.285082  307829 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:16:58.285099  307829 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1018 09:16:58.289252  307829 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1018 09:16:58.290394  307829 api_server.go:141] control plane version: v1.34.1
	I1018 09:16:58.290423  307829 api_server.go:131] duration metric: took 5.333954ms to wait for apiserver health ...
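
Editor's note: the healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, succeeding once it returns 200 with body "ok". A minimal sketch of such a probe follows; certificate verification is skipped here for brevity, which is an assumption of this sketch — minikube itself trusts the cluster CA from its kubeconfig.

    // Minimal /healthz probe sketch (NOT minikube's api_server.go).
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption for brevity; do not do this against real clusters.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.94.2:8444/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
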
	I1018 09:16:58.290434  307829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:16:58.293660  307829 system_pods.go:59] 8 kube-system pods found
	I1018 09:16:58.293697  307829 system_pods.go:61] "coredns-66bc5c9577-bpcsk" [d89ef1c8-1a4b-41b5-9ecf-66daaae426ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:58.293708  307829 system_pods.go:61] "etcd-default-k8s-diff-port-986220" [67037dca-39e0-4261-91f2-a5d11ca68620] Running
	I1018 09:16:58.293718  307829 system_pods.go:61] "kindnet-cj6bv" [b21a6117-74a1-4a94-9dc4-3ba0856e6712] Running
	I1018 09:16:58.293724  307829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-986220" [26542843-30b7-4103-86cd-3f8870606b3f] Running
	I1018 09:16:58.293729  307829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-986220" [a8b55bcd-a0ae-4ef8-9193-06104a304545] Running
	I1018 09:16:58.293735  307829 system_pods.go:61] "kube-proxy-vvtpl" [3be57d5a-db16-4280-936c-af1a1e022017] Running
	I1018 09:16:58.293741  307829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-986220" [baa56190-deca-42c8-a305-04e13a5a0868] Running
	I1018 09:16:58.293762  307829 system_pods.go:61] "storage-provisioner" [6b5391e1-9c35-460f-b52d-8d434084db0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:16:58.293774  307829 system_pods.go:74] duration metric: took 3.332829ms to wait for pod list to return data ...
	I1018 09:16:58.293789  307829 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:16:58.296393  307829 default_sa.go:45] found service account: "default"
	I1018 09:16:58.296417  307829 default_sa.go:55] duration metric: took 2.620669ms for default service account to be created ...
	I1018 09:16:58.296426  307829 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:16:58.299232  307829 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:58.299265  307829 system_pods.go:89] "coredns-66bc5c9577-bpcsk" [d89ef1c8-1a4b-41b5-9ecf-66daaae426ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:58.299273  307829 system_pods.go:89] "etcd-default-k8s-diff-port-986220" [67037dca-39e0-4261-91f2-a5d11ca68620] Running
	I1018 09:16:58.299281  307829 system_pods.go:89] "kindnet-cj6bv" [b21a6117-74a1-4a94-9dc4-3ba0856e6712] Running
	I1018 09:16:58.299287  307829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-986220" [26542843-30b7-4103-86cd-3f8870606b3f] Running
	I1018 09:16:58.299294  307829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-986220" [a8b55bcd-a0ae-4ef8-9193-06104a304545] Running
	I1018 09:16:58.299300  307829 system_pods.go:89] "kube-proxy-vvtpl" [3be57d5a-db16-4280-936c-af1a1e022017] Running
	I1018 09:16:58.299306  307829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-986220" [baa56190-deca-42c8-a305-04e13a5a0868] Running
	I1018 09:16:58.299317  307829 system_pods.go:89] "storage-provisioner" [6b5391e1-9c35-460f-b52d-8d434084db0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:16:58.299379  307829 retry.go:31] will retry after 222.938941ms: missing components: kube-dns
	I1018 09:16:58.526793  307829 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:58.526823  307829 system_pods.go:89] "coredns-66bc5c9577-bpcsk" [d89ef1c8-1a4b-41b5-9ecf-66daaae426ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:58.526829  307829 system_pods.go:89] "etcd-default-k8s-diff-port-986220" [67037dca-39e0-4261-91f2-a5d11ca68620] Running
	I1018 09:16:58.526835  307829 system_pods.go:89] "kindnet-cj6bv" [b21a6117-74a1-4a94-9dc4-3ba0856e6712] Running
	I1018 09:16:58.526838  307829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-986220" [26542843-30b7-4103-86cd-3f8870606b3f] Running
	I1018 09:16:58.526842  307829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-986220" [a8b55bcd-a0ae-4ef8-9193-06104a304545] Running
	I1018 09:16:58.526845  307829 system_pods.go:89] "kube-proxy-vvtpl" [3be57d5a-db16-4280-936c-af1a1e022017] Running
	I1018 09:16:58.526848  307829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-986220" [baa56190-deca-42c8-a305-04e13a5a0868] Running
	I1018 09:16:58.526854  307829 system_pods.go:89] "storage-provisioner" [6b5391e1-9c35-460f-b52d-8d434084db0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:16:58.526867  307829 retry.go:31] will retry after 336.886717ms: missing components: kube-dns
	I1018 09:16:58.867886  307829 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:58.867916  307829 system_pods.go:89] "coredns-66bc5c9577-bpcsk" [d89ef1c8-1a4b-41b5-9ecf-66daaae426ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:58.867921  307829 system_pods.go:89] "etcd-default-k8s-diff-port-986220" [67037dca-39e0-4261-91f2-a5d11ca68620] Running
	I1018 09:16:58.867929  307829 system_pods.go:89] "kindnet-cj6bv" [b21a6117-74a1-4a94-9dc4-3ba0856e6712] Running
	I1018 09:16:58.867935  307829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-986220" [26542843-30b7-4103-86cd-3f8870606b3f] Running
	I1018 09:16:58.867942  307829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-986220" [a8b55bcd-a0ae-4ef8-9193-06104a304545] Running
	I1018 09:16:58.867950  307829 system_pods.go:89] "kube-proxy-vvtpl" [3be57d5a-db16-4280-936c-af1a1e022017] Running
	I1018 09:16:58.867955  307829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-986220" [baa56190-deca-42c8-a305-04e13a5a0868] Running
	I1018 09:16:58.867966  307829 system_pods.go:89] "storage-provisioner" [6b5391e1-9c35-460f-b52d-8d434084db0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:16:58.867984  307829 retry.go:31] will retry after 410.478388ms: missing components: kube-dns
	I1018 09:16:59.283325  307829 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:59.283375  307829 system_pods.go:89] "coredns-66bc5c9577-bpcsk" [d89ef1c8-1a4b-41b5-9ecf-66daaae426ba] Running
	I1018 09:16:59.283385  307829 system_pods.go:89] "etcd-default-k8s-diff-port-986220" [67037dca-39e0-4261-91f2-a5d11ca68620] Running
	I1018 09:16:59.283393  307829 system_pods.go:89] "kindnet-cj6bv" [b21a6117-74a1-4a94-9dc4-3ba0856e6712] Running
	I1018 09:16:59.283399  307829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-986220" [26542843-30b7-4103-86cd-3f8870606b3f] Running
	I1018 09:16:59.283404  307829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-986220" [a8b55bcd-a0ae-4ef8-9193-06104a304545] Running
	I1018 09:16:59.283409  307829 system_pods.go:89] "kube-proxy-vvtpl" [3be57d5a-db16-4280-936c-af1a1e022017] Running
	I1018 09:16:59.283417  307829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-986220" [baa56190-deca-42c8-a305-04e13a5a0868] Running
	I1018 09:16:59.283421  307829 system_pods.go:89] "storage-provisioner" [6b5391e1-9c35-460f-b52d-8d434084db0e] Running
	I1018 09:16:59.283432  307829 system_pods.go:126] duration metric: took 986.998816ms to wait for k8s-apps to be running ...
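
Editor's note: the retry.go lines above show the k8s-apps wait re-listing kube-system pods with a growing, jittered delay (222ms, then 336ms, then 410ms) until no component is missing. The sketch below captures that loop shape only; componentsRunning is a hypothetical stand-in for the pod-list check, and the backoff constants are assumptions, not minikube's retry.go.

    // Illustrative retry-with-growing-jittered-delay loop (NOT minikube's retry.go).
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // componentsRunning stands in for the pod-list check; hypothetical.
    func componentsRunning() bool { return false }

    func main() {
        base := 200 * time.Millisecond
        for attempt := 0; attempt < 10; attempt++ {
            if componentsRunning() {
                fmt.Println("all components running")
                return
            }
            // Grow the base each attempt and add jitter, echoing the
            // 222ms -> 336ms -> 410ms progression in the log above.
            delay := base + time.Duration(rand.Int63n(int64(base)))
            base += 100 * time.Millisecond
            time.Sleep(delay)
        }
        fmt.Println("gave up waiting for components")
    }
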
	I1018 09:16:59.283448  307829 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:16:59.283507  307829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:16:59.299136  307829 system_svc.go:56] duration metric: took 15.661893ms WaitForService to wait for kubelet
	I1018 09:16:59.299172  307829 kubeadm.go:586] duration metric: took 12.342081392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:16:59.299193  307829 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:16:59.302607  307829 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:16:59.302636  307829 node_conditions.go:123] node cpu capacity is 8
	I1018 09:16:59.302648  307829 node_conditions.go:105] duration metric: took 3.45011ms to run NodePressure ...
	I1018 09:16:59.302660  307829 start.go:241] waiting for startup goroutines ...
	I1018 09:16:59.302666  307829 start.go:246] waiting for cluster config update ...
	I1018 09:16:59.302677  307829 start.go:255] writing updated cluster config ...
	I1018 09:16:59.303823  307829 ssh_runner.go:195] Run: rm -f paused
	I1018 09:16:59.308246  307829 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:16:59.313062  307829 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bpcsk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.318310  307829 pod_ready.go:94] pod "coredns-66bc5c9577-bpcsk" is "Ready"
	I1018 09:16:59.318333  307829 pod_ready.go:86] duration metric: took 5.242596ms for pod "coredns-66bc5c9577-bpcsk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.320834  307829 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.325424  307829 pod_ready.go:94] pod "etcd-default-k8s-diff-port-986220" is "Ready"
	I1018 09:16:59.325454  307829 pod_ready.go:86] duration metric: took 4.599009ms for pod "etcd-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.327639  307829 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.332166  307829 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-986220" is "Ready"
	I1018 09:16:59.332189  307829 pod_ready.go:86] duration metric: took 4.523446ms for pod "kube-apiserver-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.334399  307829 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.714156  307829 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-986220" is "Ready"
	I1018 09:16:59.714185  307829 pod_ready.go:86] duration metric: took 379.76296ms for pod "kube-controller-manager-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.913655  307829 pod_ready.go:83] waiting for pod "kube-proxy-vvtpl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:00.313187  307829 pod_ready.go:94] pod "kube-proxy-vvtpl" is "Ready"
	I1018 09:17:00.313213  307829 pod_ready.go:86] duration metric: took 399.535476ms for pod "kube-proxy-vvtpl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:00.514099  307829 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:00.914005  307829 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-986220" is "Ready"
	I1018 09:17:00.914034  307829 pod_ready.go:86] duration metric: took 399.907153ms for pod "kube-scheduler-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:00.914046  307829 pod_ready.go:40] duration metric: took 1.605764724s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:17:00.963127  307829 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:17:00.965147  307829 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-986220" cluster and "default" namespace by default
	W1018 09:16:57.064222  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:59.064638  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 18 09:16:34 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:34.426913522Z" level=info msg="Started container" PID=1706 containerID=c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d/dashboard-metrics-scraper id=65c7597e-8ea2-4732-94ea-6e1562da1940 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e165057c7cc8b14fb63454b0740dfeb6ed8f3117a94c168883cfb9822356007c
	Oct 18 09:16:35 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:35.381463137Z" level=info msg="Removing container: 1bfa96685a18fe9ed455af32503237123856585da9cd413b1048c58c828adbd5" id=c0d7fc08-261f-445b-a9e1-c3059f068ec6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:16:35 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:35.394135672Z" level=info msg="Removed container 1bfa96685a18fe9ed455af32503237123856585da9cd413b1048c58c828adbd5: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d/dashboard-metrics-scraper" id=c0d7fc08-261f-445b-a9e1-c3059f068ec6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.407665795Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ad53ecd6-a5b5-472d-9d8f-89a5915d8acd name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.408714106Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3c95a7e7-6160-43cd-b4c7-8aba2a8a9471 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.410291362Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=adccd699-6b5d-41e2-9b63-100b00fca986 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.414007422Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.420551436Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.420723163Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/eeff2469445ced83e17307204e3a6fc44e67a8d3667b3e127180227b3e96d406/merged/etc/passwd: no such file or directory"
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.420751506Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/eeff2469445ced83e17307204e3a6fc44e67a8d3667b3e127180227b3e96d406/merged/etc/group: no such file or directory"
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.421035499Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.449780046Z" level=info msg="Created container dd5dcb7d66045a0152ff8c078146a86a98fcc49d2df4f7b6dd15a94d89058078: kube-system/storage-provisioner/storage-provisioner" id=adccd699-6b5d-41e2-9b63-100b00fca986 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.45047372Z" level=info msg="Starting container: dd5dcb7d66045a0152ff8c078146a86a98fcc49d2df4f7b6dd15a94d89058078" id=a54e2822-1f4b-456b-9aaa-2c113750295d name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.452405741Z" level=info msg="Started container" PID=1725 containerID=dd5dcb7d66045a0152ff8c078146a86a98fcc49d2df4f7b6dd15a94d89058078 description=kube-system/storage-provisioner/storage-provisioner id=a54e2822-1f4b-456b-9aaa-2c113750295d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6467b0224b47e15db874d81ac86ce634390aec4c2f6b72fb5fb296bfcf5aadb6
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.278240973Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b5d8df68-6115-4e4a-aed2-ae00a4b47579 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.279277607Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2f834347-105d-48d7-9ac3-d136d1d5bfc6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.280294822Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d/dashboard-metrics-scraper" id=276118ca-f3d5-44a0-aa8d-5c1b84ed523a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.280591762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.28600226Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.286492634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.323607414Z" level=info msg="Created container f59a4536a6be07648a7d886609196032c6dd09725a1a2250b74c79dd1ca7a6ee: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d/dashboard-metrics-scraper" id=276118ca-f3d5-44a0-aa8d-5c1b84ed523a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.324302697Z" level=info msg="Starting container: f59a4536a6be07648a7d886609196032c6dd09725a1a2250b74c79dd1ca7a6ee" id=ed99e408-266e-47f5-b569-ab81c2443184 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.32652895Z" level=info msg="Started container" PID=1762 containerID=f59a4536a6be07648a7d886609196032c6dd09725a1a2250b74c79dd1ca7a6ee description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d/dashboard-metrics-scraper id=ed99e408-266e-47f5-b569-ab81c2443184 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e165057c7cc8b14fb63454b0740dfeb6ed8f3117a94c168883cfb9822356007c
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.422888771Z" level=info msg="Removing container: c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe" id=6fcce697-eccd-4c8a-b809-dabbb7adc3bd name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.432927701Z" level=info msg="Removed container c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d/dashboard-metrics-scraper" id=6fcce697-eccd-4c8a-b809-dabbb7adc3bd name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	f59a4536a6be0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   2                   e165057c7cc8b       dashboard-metrics-scraper-5f989dc9cf-zdj6d       kubernetes-dashboard
	dd5dcb7d66045       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   6467b0224b47e       storage-provisioner                              kube-system
	1731b366ef3fd       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   30 seconds ago      Running             kubernetes-dashboard        0                   520ec419f0c84       kubernetes-dashboard-8694d4445c-qms7p            kubernetes-dashboard
	850ecec987439       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           48 seconds ago      Running             coredns                     0                   6b5019e0a7a2b       coredns-5dd5756b68-gwttp                         kube-system
	9a9fa34e51033       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   24ee2b343ec2d       busybox                                          default
	e13798224f38d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   6467b0224b47e       storage-provisioner                              kube-system
	37cdebb50e345       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           48 seconds ago      Running             kube-proxy                  0                   0527ef968c498       kube-proxy-rrzqp                                 kube-system
	c707d266c99b4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   926cfe19e44bf       kindnet-k2756                                    kube-system
	16a4a0198ff18       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           52 seconds ago      Running             kube-controller-manager     0                   81adcb4e1b7bc       kube-controller-manager-old-k8s-version-951975   kube-system
	b2de01dc9072c       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           52 seconds ago      Running             kube-scheduler              0                   f3ee9d25a0bc8       kube-scheduler-old-k8s-version-951975            kube-system
	f2e7310b9fd30       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           52 seconds ago      Running             kube-apiserver              0                   21f7b008a8428       kube-apiserver-old-k8s-version-951975            kube-system
	1d1d7b9a46038       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           52 seconds ago      Running             etcd                        0                   e7764baa63e9a       etcd-old-k8s-version-951975                      kube-system
	
	
	==> coredns [850ecec987439ee84e6448cada291df9cce48b7f0c730a4f0638f43a13af3bc0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52869 - 7986 "HINFO IN 6955480671343103566.8601139003691296637. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.865316514s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-951975
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-951975
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=old-k8s-version-951975
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_15_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:15:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-951975
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:16:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:16:44 +0000   Sat, 18 Oct 2025 09:15:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:16:44 +0000   Sat, 18 Oct 2025 09:15:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:16:44 +0000   Sat, 18 Oct 2025 09:15:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:16:44 +0000   Sat, 18 Oct 2025 09:15:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-951975
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                bca7ca56-ad4d-4955-80a5-36cf90a3bf8e
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-gwttp                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-old-k8s-version-951975                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-k2756                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-951975             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-951975    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-rrzqp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-951975             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-zdj6d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-qms7p             0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-951975 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node old-k8s-version-951975 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node old-k8s-version-951975 event: Registered Node old-k8s-version-951975 in Controller
	  Normal  NodeReady                91s                  kubelet          Node old-k8s-version-951975 status is now: NodeReady
	  Normal  Starting                 52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)    kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)    kubelet          Node old-k8s-version-951975 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)    kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                  node-controller  Node old-k8s-version-951975 event: Registered Node old-k8s-version-951975 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[ +43.809014] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 6f 4b 2b 7f 46 08 06
	[  +0.000367] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	
	
	==> etcd [1d1d7b9a4603835edfbcabef69e64877d18a1499301245bf79771003e000b780] <==
	{"level":"info","ts":"2025-10-18T09:16:10.853815Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:16:10.854271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-10-18T09:16:10.854394Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-10-18T09:16:10.854541Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:16:10.854574Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:16:10.853979Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:16:10.856237Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T09:16:10.856498Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T09:16:10.856531Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T09:16:10.856632Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-18T09:16:10.856646Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-18T09:16:12.543825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T09:16:12.5439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T09:16:12.543919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-18T09:16:12.543941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T09:16:12.543947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-18T09:16:12.543955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-10-18T09:16:12.543965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-18T09:16:12.545465Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-951975 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T09:16:12.545467Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:16:12.545502Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:16:12.545964Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T09:16:12.54602Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T09:16:12.547779Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-18T09:16:12.547784Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:17:03 up 59 min,  0 user,  load average: 3.51, 3.42, 2.35
	Linux old-k8s-version-951975 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c707d266c99b4e19a4a07275b8e8367a1594b6cf94012a72f161afb9027cd1cf] <==
	I1018 09:16:14.852506       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:16:14.852763       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 09:16:14.852964       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:16:14.852988       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:16:14.853002       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:16:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:16:15.153693       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:16:15.154240       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:16:15.154264       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:16:15.154480       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:16:15.555079       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:16:15.555105       1 metrics.go:72] Registering metrics
	I1018 09:16:15.555175       1 controller.go:711] "Syncing nftables rules"
	I1018 09:16:25.153911       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 09:16:25.153966       1 main.go:301] handling current node
	I1018 09:16:35.153454       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 09:16:35.153506       1 main.go:301] handling current node
	I1018 09:16:45.153453       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 09:16:45.153484       1 main.go:301] handling current node
	I1018 09:16:55.158383       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 09:16:55.158419       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f2e7310b9fd30510062cf4fc3f3196d0199a8bb693ccf374fe7926da05bc717a] <==
	I1018 09:16:13.534877       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 09:16:13.606105       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:16:13.630546       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 09:16:13.630569       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 09:16:13.630567       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 09:16:13.630895       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:16:13.631498       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 09:16:13.636761       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 09:16:13.652396       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 09:16:13.669873       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 09:16:13.669983       1 aggregator.go:166] initial CRD sync complete...
	I1018 09:16:13.670015       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 09:16:13.670044       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:16:13.670057       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:16:14.521542       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 09:16:14.533856       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:16:14.563691       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 09:16:14.611939       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:16:14.627857       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:16:14.643013       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 09:16:14.708585       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.33.157"}
	I1018 09:16:14.725387       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.39.212"}
	I1018 09:16:26.832066       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 09:16:27.078814       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 09:16:27.128533       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [16a4a0198ff18096f38de4bc58c31bf5f03bdf37076c3c4e4d32e4fb7d38b886] <==
	I1018 09:16:26.985108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.212µs"
	I1018 09:16:27.086201       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1018 09:16:27.086536       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1018 09:16:27.096418       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-qms7p"
	I1018 09:16:27.097114       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-zdj6d"
	I1018 09:16:27.102373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="16.690529ms"
	I1018 09:16:27.106011       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.148807ms"
	I1018 09:16:27.110295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.862564ms"
	I1018 09:16:27.110436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="91.363µs"
	I1018 09:16:27.116160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.089678ms"
	I1018 09:16:27.116374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="62.545µs"
	I1018 09:16:27.123809       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="89.048µs"
	I1018 09:16:27.143767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="127.979µs"
	I1018 09:16:27.153924       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:16:27.173907       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:16:27.173943       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 09:16:32.393636       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.734772ms"
	I1018 09:16:32.394733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="115.652µs"
	I1018 09:16:34.386868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.124µs"
	I1018 09:16:35.391866       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="191.196µs"
	I1018 09:16:36.400499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="179.455µs"
	I1018 09:16:46.002554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.426269ms"
	I1018 09:16:46.002699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.449µs"
	I1018 09:16:49.434091       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.085µs"
	I1018 09:16:57.419672       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.023µs"
	
	
	==> kube-proxy [37cdebb50e3452e3797b2554403f29b4e05357c580e8231a729dc63a87d0f932] <==
	I1018 09:16:14.719539       1 server_others.go:69] "Using iptables proxy"
	I1018 09:16:14.729544       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1018 09:16:14.752777       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:16:14.755817       1 server_others.go:152] "Using iptables Proxier"
	I1018 09:16:14.755867       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 09:16:14.755878       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 09:16:14.755920       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 09:16:14.757098       1 server.go:846] "Version info" version="v1.28.0"
	I1018 09:16:14.757128       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:16:14.758251       1 config.go:188] "Starting service config controller"
	I1018 09:16:14.758297       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 09:16:14.758479       1 config.go:315] "Starting node config controller"
	I1018 09:16:14.758551       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 09:16:14.758490       1 config.go:97] "Starting endpoint slice config controller"
	I1018 09:16:14.758609       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 09:16:14.859013       1 shared_informer.go:318] Caches are synced for service config
	I1018 09:16:14.859034       1 shared_informer.go:318] Caches are synced for node config
	I1018 09:16:14.860179       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b2de01dc9072ccffefae3182aec6a17d04655623980355d4f88424a0d4e01818] <==
	I1018 09:16:11.463746       1 serving.go:348] Generated self-signed cert in-memory
	W1018 09:16:13.594708       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:16:13.594754       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:16:13.594767       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:16:13.594777       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:16:13.615789       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1018 09:16:13.615815       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:16:13.617170       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:16:13.617207       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 09:16:13.618049       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1018 09:16:13.618083       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1018 09:16:13.717632       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 09:16:27 old-k8s-version-951975 kubelet[718]: I1018 09:16:27.106911     718 topology_manager.go:215] "Topology Admit Handler" podUID="55b6301a-677b-42eb-90f9-ff3b66ddb759" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-qms7p"
	Oct 18 09:16:27 old-k8s-version-951975 kubelet[718]: I1018 09:16:27.179146     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9xhh\" (UniqueName: \"kubernetes.io/projected/ee98ebcc-4473-4573-a4a4-e4f65da59d9b-kube-api-access-b9xhh\") pod \"dashboard-metrics-scraper-5f989dc9cf-zdj6d\" (UID: \"ee98ebcc-4473-4573-a4a4-e4f65da59d9b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d"
	Oct 18 09:16:27 old-k8s-version-951975 kubelet[718]: I1018 09:16:27.179207     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ee98ebcc-4473-4573-a4a4-e4f65da59d9b-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-zdj6d\" (UID: \"ee98ebcc-4473-4573-a4a4-e4f65da59d9b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d"
	Oct 18 09:16:27 old-k8s-version-951975 kubelet[718]: I1018 09:16:27.179244     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqz5\" (UniqueName: \"kubernetes.io/projected/55b6301a-677b-42eb-90f9-ff3b66ddb759-kube-api-access-lpqz5\") pod \"kubernetes-dashboard-8694d4445c-qms7p\" (UID: \"55b6301a-677b-42eb-90f9-ff3b66ddb759\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qms7p"
	Oct 18 09:16:27 old-k8s-version-951975 kubelet[718]: I1018 09:16:27.179362     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/55b6301a-677b-42eb-90f9-ff3b66ddb759-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-qms7p\" (UID: \"55b6301a-677b-42eb-90f9-ff3b66ddb759\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qms7p"
	Oct 18 09:16:32 old-k8s-version-951975 kubelet[718]: I1018 09:16:32.382438     718 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qms7p" podStartSLOduration=0.820473436 podCreationTimestamp="2025-10-18 09:16:27 +0000 UTC" firstStartedPulling="2025-10-18 09:16:27.435627203 +0000 UTC m=+17.268674601" lastFinishedPulling="2025-10-18 09:16:31.99752505 +0000 UTC m=+21.830572495" observedRunningTime="2025-10-18 09:16:32.38212967 +0000 UTC m=+22.215177083" watchObservedRunningTime="2025-10-18 09:16:32.38237133 +0000 UTC m=+22.215418744"
	Oct 18 09:16:34 old-k8s-version-951975 kubelet[718]: I1018 09:16:34.375183     718 scope.go:117] "RemoveContainer" containerID="1bfa96685a18fe9ed455af32503237123856585da9cd413b1048c58c828adbd5"
	Oct 18 09:16:35 old-k8s-version-951975 kubelet[718]: I1018 09:16:35.380075     718 scope.go:117] "RemoveContainer" containerID="1bfa96685a18fe9ed455af32503237123856585da9cd413b1048c58c828adbd5"
	Oct 18 09:16:35 old-k8s-version-951975 kubelet[718]: I1018 09:16:35.380259     718 scope.go:117] "RemoveContainer" containerID="c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe"
	Oct 18 09:16:35 old-k8s-version-951975 kubelet[718]: E1018 09:16:35.380714     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zdj6d_kubernetes-dashboard(ee98ebcc-4473-4573-a4a4-e4f65da59d9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d" podUID="ee98ebcc-4473-4573-a4a4-e4f65da59d9b"
	Oct 18 09:16:36 old-k8s-version-951975 kubelet[718]: I1018 09:16:36.384887     718 scope.go:117] "RemoveContainer" containerID="c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe"
	Oct 18 09:16:36 old-k8s-version-951975 kubelet[718]: E1018 09:16:36.385218     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zdj6d_kubernetes-dashboard(ee98ebcc-4473-4573-a4a4-e4f65da59d9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d" podUID="ee98ebcc-4473-4573-a4a4-e4f65da59d9b"
	Oct 18 09:16:37 old-k8s-version-951975 kubelet[718]: I1018 09:16:37.408684     718 scope.go:117] "RemoveContainer" containerID="c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe"
	Oct 18 09:16:37 old-k8s-version-951975 kubelet[718]: E1018 09:16:37.409031     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zdj6d_kubernetes-dashboard(ee98ebcc-4473-4573-a4a4-e4f65da59d9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d" podUID="ee98ebcc-4473-4573-a4a4-e4f65da59d9b"
	Oct 18 09:16:45 old-k8s-version-951975 kubelet[718]: I1018 09:16:45.407115     718 scope.go:117] "RemoveContainer" containerID="e13798224f38d89856ac0d589f0dbef9694affee05d404ed03bc1423d5b36d66"
	Oct 18 09:16:49 old-k8s-version-951975 kubelet[718]: I1018 09:16:49.277571     718 scope.go:117] "RemoveContainer" containerID="c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe"
	Oct 18 09:16:49 old-k8s-version-951975 kubelet[718]: I1018 09:16:49.421644     718 scope.go:117] "RemoveContainer" containerID="c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe"
	Oct 18 09:16:49 old-k8s-version-951975 kubelet[718]: I1018 09:16:49.421892     718 scope.go:117] "RemoveContainer" containerID="f59a4536a6be07648a7d886609196032c6dd09725a1a2250b74c79dd1ca7a6ee"
	Oct 18 09:16:49 old-k8s-version-951975 kubelet[718]: E1018 09:16:49.422263     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zdj6d_kubernetes-dashboard(ee98ebcc-4473-4573-a4a4-e4f65da59d9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d" podUID="ee98ebcc-4473-4573-a4a4-e4f65da59d9b"
	Oct 18 09:16:57 old-k8s-version-951975 kubelet[718]: I1018 09:16:57.408932     718 scope.go:117] "RemoveContainer" containerID="f59a4536a6be07648a7d886609196032c6dd09725a1a2250b74c79dd1ca7a6ee"
	Oct 18 09:16:57 old-k8s-version-951975 kubelet[718]: E1018 09:16:57.409272     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zdj6d_kubernetes-dashboard(ee98ebcc-4473-4573-a4a4-e4f65da59d9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d" podUID="ee98ebcc-4473-4573-a4a4-e4f65da59d9b"
	Oct 18 09:16:59 old-k8s-version-951975 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:16:59 old-k8s-version-951975 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:16:59 old-k8s-version-951975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:16:59 old-k8s-version-951975 systemd[1]: kubelet.service: Consumed 1.603s CPU time.
	
	
	==> kubernetes-dashboard [1731b366ef3fded158839dbcd6cc44068387d425b2e39024818c85643cff484e] <==
	2025/10/18 09:16:32 Using namespace: kubernetes-dashboard
	2025/10/18 09:16:32 Using in-cluster config to connect to apiserver
	2025/10/18 09:16:32 Using secret token for csrf signing
	2025/10/18 09:16:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:16:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:16:32 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 09:16:32 Generating JWE encryption key
	2025/10/18 09:16:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:16:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:16:32 Initializing JWE encryption key from synchronized object
	2025/10/18 09:16:32 Creating in-cluster Sidecar client
	2025/10/18 09:16:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:16:32 Serving insecurely on HTTP port: 9090
	2025/10/18 09:17:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:16:32 Starting overwatch
	
	
	==> storage-provisioner [dd5dcb7d66045a0152ff8c078146a86a98fcc49d2df4f7b6dd15a94d89058078] <==
	I1018 09:16:45.465666       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:16:45.475117       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:16:45.475168       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 09:17:02.876197       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:17:02.876383       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27b4ae7e-91ad-46cf-b758-f945092ba79c", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-951975_903bd012-b300-4dcd-8ed2-63bbd3769fea became leader
	I1018 09:17:02.876453       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-951975_903bd012-b300-4dcd-8ed2-63bbd3769fea!
	I1018 09:17:02.977145       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-951975_903bd012-b300-4dcd-8ed2-63bbd3769fea!
	
	
	==> storage-provisioner [e13798224f38d89856ac0d589f0dbef9694affee05d404ed03bc1423d5b36d66] <==
	I1018 09:16:14.684717       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:16:44.690170       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
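A note on the kubelet section above: dashboard-metrics-scraper is stuck in CrashLoopBackOff, and the back-off visibly grows from 10s to 20s between restarts. A minimal Go sketch of that schedule, assuming the usual kubelet defaults (10s base delay, doubled per failed restart, capped at 5m; these constants are assumptions, not values read from this run):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Assumed kubelet defaults: 10s initial back-off, doubled on each
		// failed restart, capped at 5m (and reset after stable running).
		delay := 10 * time.Second
		const maxDelay = 5 * time.Minute
		for restart := 1; restart <= 6; restart++ {
			fmt.Printf("restart %d: back-off %v\n", restart, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}

Under those assumptions, the "back-off 10s" at 09:16:35 followed by "back-off 20s" at 09:16:49 is the first doubling step, so the container had already failed at least twice by then.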
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-951975 -n old-k8s-version-951975
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-951975 -n old-k8s-version-951975: exit status 2 (326.714724ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
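The --format flag used in these status checks is a Go text/template rendered against minikube's status struct, which is why the raw stdout above is just "Running". A self-contained sketch of the same mechanism (this Status type is a hypothetical two-field stand-in, not minikube's actual struct):

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical stand-in for the struct minikube renders --format against.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Paused"}
		// Equivalent in spirit to: minikube status --format={{.APIServer}}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}

The same template syntax drives the --format={{.Host}} check later in this post-mortem.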
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-951975 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-951975
helpers_test.go:243: (dbg) docker inspect old-k8s-version-951975:

-- stdout --
	[
	    {
	        "Id": "d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866",
	        "Created": "2025-10-18T09:14:48.164862927Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 302921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:16:03.734445588Z",
	            "FinishedAt": "2025-10-18T09:16:02.821932847Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866/hosts",
	        "LogPath": "/var/lib/docker/containers/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866/d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866-json.log",
	        "Name": "/old-k8s-version-951975",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-951975:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-951975",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d0100f52d1269537aed479fa34a959a6c66c92a27d1fccddcac8f2b32127e866",
	                "LowerDir": "/var/lib/docker/overlay2/e7302d3490bbf936dd2f2d552ba2ba9dcf7d4bb0152646a3d6445c572600b324-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e7302d3490bbf936dd2f2d552ba2ba9dcf7d4bb0152646a3d6445c572600b324/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e7302d3490bbf936dd2f2d552ba2ba9dcf7d4bb0152646a3d6445c572600b324/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e7302d3490bbf936dd2f2d552ba2ba9dcf7d4bb0152646a3d6445c572600b324/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-951975",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-951975/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-951975",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-951975",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-951975",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3cd13497a022261452eb5d55c790262e06bba8e434c0d50f8a561ab6c128fa72",
	            "SandboxKey": "/var/run/docker/netns/3cd13497a022",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-951975": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:02:bf:15:d1:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "24bc48639b258a05e4ef01c1cdad81fb398d660a6740ed3b45a916093c5c2afe",
	                    "EndpointID": "d60f8d93e6ca94ac31f061dbcf0c07af2906bb8301000001bd6ebc63a7b68d1d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-951975",
	                        "d0100f52d126"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
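docker inspect prints a JSON array, and for this Pause failure the relevant part of the dump above is State ("Status": "running", "Paused": false; note that minikube pause acts on the container runtime inside the node, so the outer container's Paused flag staying false is expected and not by itself diagnostic). A trimmed Go sketch for pulling those fields out, with the struct cut down to just what this report shows:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Trimmed to the fields of interest from the inspect dump above.
	type container struct {
		Name  string
		State struct {
			Status  string
			Running bool
			Paused  bool
		}
	}

	func main() {
		// Abbreviated sample in the same shape as the real output.
		raw := []byte(`[{"Name":"/old-k8s-version-951975",
		  "State":{"Status":"running","Running":true,"Paused":false}}]`)
		var out []container
		if err := json.Unmarshal(raw, &out); err != nil {
			panic(err)
		}
		fmt.Println(out[0].Name, out[0].State.Status, out[0].State.Paused)
	}

On the command line, docker inspect -f '{{.State.Paused}}' old-k8s-version-951975 would extract the same field through the template syntax the status checks use.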
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-951975 -n old-k8s-version-951975
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-951975 -n old-k8s-version-951975: exit status 2 (317.699565ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-951975 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-951975 logs -n 25: (1.158935423s)
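The -n 25 passed to minikube logs above caps each component at its last 25 lines, which is presumably why every "==>" section in these dumps is a short tail. A sketch of that tail-N behaviour over a plain line stream (an illustration, not minikube's implementation):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// tail keeps only the last n lines seen on the scanner.
	func tail(sc *bufio.Scanner, n int) []string {
		buf := make([]string, 0, n)
		for sc.Scan() {
			if len(buf) == n {
				buf = buf[1:]
			}
			buf = append(buf, sc.Text())
		}
		return buf
	}

	func main() {
		src := strings.NewReader("a\nb\nc\nd\n")
		fmt.Println(tail(bufio.NewScanner(src), 2)) // [c d]
	}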
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/docker/daemon.json                                                                                                            │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo docker system info                                                                                                                     │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl status cri-docker --all --full --no-pager                                                                                    │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl cat cri-docker --no-pager                                                                                                    │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                               │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                         │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cri-dockerd --version                                                                                                                  │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl status containerd --all --full --no-pager                                                                                    │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl cat containerd --no-pager                                                                                                    │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /lib/systemd/system/containerd.service                                                                                             │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/containerd/config.toml                                                                                                        │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo containerd config dump                                                                                                                 │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl status crio --all --full --no-pager                                                                                          │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl cat crio --no-pager                                                                                                          │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo crio config                                                                                                                            │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ delete  │ -p enable-default-cni-448954                                                                                                                                             │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ delete  │ -p disable-driver-mounts-634520                                                                                                                                          │ disable-driver-mounts-634520 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-031066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p no-preload-031066 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-880603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-880603 --alsologtostderr -v=3                                                                                                                             │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ image   │ old-k8s-version-951975 image list --format=json                                                                                                                          │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ pause   │ -p old-k8s-version-951975 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:16:21
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:16:21.259556  309439 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:16:21.259842  309439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:16:21.259853  309439 out.go:374] Setting ErrFile to fd 2...
	I1018 09:16:21.259859  309439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:16:21.260111  309439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:16:21.260632  309439 out.go:368] Setting JSON to false
	I1018 09:16:21.261865  309439 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3529,"bootTime":1760775452,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:16:21.261961  309439 start.go:141] virtualization: kvm guest
	I1018 09:16:21.264134  309439 out.go:179] * [no-preload-031066] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:16:21.265731  309439 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:16:21.265725  309439 notify.go:220] Checking for updates...
	I1018 09:16:21.268703  309439 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:16:21.270038  309439 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:16:21.271373  309439 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:16:21.272816  309439 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:16:21.274205  309439 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:16:21.275956  309439 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:21.276446  309439 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:16:21.302079  309439 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:16:21.302171  309439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:16:21.363454  309439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-18 09:16:21.352641655 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:16:21.363573  309439 docker.go:318] overlay module found
	I1018 09:16:21.365496  309439 out.go:179] * Using the docker driver based on existing profile
	I1018 09:16:21.366846  309439 start.go:305] selected driver: docker
	I1018 09:16:21.366860  309439 start.go:925] validating driver "docker" against &{Name:no-preload-031066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:16:21.366946  309439 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:16:21.367537  309439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:16:21.430714  309439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-18 09:16:21.420288348 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:16:21.431045  309439 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:16:21.431076  309439 cni.go:84] Creating CNI manager for ""
	I1018 09:16:21.431123  309439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:16:21.431162  309439 start.go:349] cluster config:
	{Name:no-preload-031066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:16:21.433306  309439 out.go:179] * Starting "no-preload-031066" primary control-plane node in "no-preload-031066" cluster
	I1018 09:16:21.434506  309439 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:16:21.435855  309439 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:16:21.437073  309439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:16:21.437171  309439 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:16:21.437215  309439 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/config.json ...
	I1018 09:16:21.437382  309439 cache.go:107] acquiring lock: {Name:mka90e9ba087577c518f2d2789ac53b5d3a7e763 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437396  309439 cache.go:107] acquiring lock: {Name:mk6fc1dc569bbb33e36e89f8f90205f595f97590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437429  309439 cache.go:107] acquiring lock: {Name:mk862309f449c155bd44d2ad75f71086b6e84154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437488  309439 cache.go:107] acquiring lock: {Name:mkba01dbd7a5ffa26c612bd6d2ecfdfb06fab7f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437517  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 09:16:21.437376  309439 cache.go:107] acquiring lock: {Name:mkd7da5cca5b2c7f5a7a2978ccb1f907bf4e999d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437529  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 09:16:21.437531  309439 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 103.396µs
	I1018 09:16:21.437548  309439 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 09:16:21.437540  309439 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 155.136µs
	I1018 09:16:21.437551  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 09:16:21.437553  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 09:16:21.437556  309439 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 09:16:21.437519  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 09:16:21.437561  309439 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 199.486µs
	I1018 09:16:21.437564  309439 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 84.165µs
	I1018 09:16:21.437573  309439 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 09:16:21.437565  309439 cache.go:107] acquiring lock: {Name:mk207c5d06cdfbb02440711f0747e0524648cf15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437611  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 09:16:21.437627  309439 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 65.353µs
	I1018 09:16:21.437636  309439 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 09:16:21.437575  309439 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 09:16:21.437513  309439 cache.go:107] acquiring lock: {Name:mk4deb8933cd428b15e028b41c12d1c1d0a4c5a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437573  309439 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 203.037µs
	I1018 09:16:21.437696  309439 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 09:16:21.437553  309439 cache.go:107] acquiring lock: {Name:mkeb58e0ef10b1fdccc29a88361956d4cde72da3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.437730  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 09:16:21.437741  309439 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 240.836µs
	I1018 09:16:21.437753  309439 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 09:16:21.437671  309439 cache.go:115] /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1018 09:16:21.437763  309439 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 319.038µs
	I1018 09:16:21.437774  309439 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 09:16:21.437787  309439 cache.go:87] Successfully saved all images to host disk.
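Each cache hit above resolves to a pre-saved image tarball under the minikube cache directory, which is why every "save to tar file ... succeeded" completes in microseconds. A hypothetical way to see them on the host (paths taken from the cache.go lines above):

	# Lists the cached registry.k8s.io image tarballs referenced above.
	ls /home/jenkins/minikube-integration/21767-5897/.minikube/cache/images/amd64/registry.k8s.io/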
	I1018 09:16:21.460092  309439 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:16:21.460113  309439 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:16:21.460128  309439 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:16:21.460160  309439 start.go:360] acquireMachinesLock for no-preload-031066: {Name:mkf2aade90157f4c0d311140fc5fc0e3e0428507 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:16:21.460220  309439 start.go:364] duration metric: took 39.29µs to acquireMachinesLock for "no-preload-031066"
	I1018 09:16:21.460239  309439 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:16:21.460249  309439 fix.go:54] fixHost starting: 
	I1018 09:16:21.460515  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:21.479263  309439 fix.go:112] recreateIfNeeded on no-preload-031066: state=Stopped err=<nil>
	W1018 09:16:21.479306  309439 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:16:18.612194  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:21.111155  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:19.794473  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:22.294004  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:19.783671  307829 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-986220:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.490229561s)
	I1018 09:16:19.783707  307829 kic.go:203] duration metric: took 4.490410558s to extract preloaded images to volume ...
	W1018 09:16:19.783815  307829 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:16:19.783854  307829 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:16:19.783901  307829 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:16:19.847832  307829 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-986220 --name default-k8s-diff-port-986220 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-986220 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-986220 --network default-k8s-diff-port-986220 --ip 192.168.94.2 --volume default-k8s-diff-port-986220:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:16:20.166578  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Running}}
	I1018 09:16:20.186662  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:20.206875  307829 cli_runner.go:164] Run: docker exec default-k8s-diff-port-986220 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:16:20.258252  307829 oci.go:144] the created container "default-k8s-diff-port-986220" has a running status.
	I1018 09:16:20.258285  307829 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa...
	I1018 09:16:20.304155  307829 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:16:20.339663  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:20.359254  307829 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:16:20.359276  307829 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-986220 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:16:20.402369  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:20.428033  307829 machine.go:93] provisionDockerMachine start ...
	I1018 09:16:20.428144  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:20.449570  307829 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:20.449929  307829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 09:16:20.449948  307829 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:16:20.450769  307829 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36384->127.0.0.1:33108: read: connection reset by peer
	I1018 09:16:23.589648  307829 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-986220
	
	I1018 09:16:23.589683  307829 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-986220"
	I1018 09:16:23.589753  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:23.609951  307829 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:23.610242  307829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 09:16:23.610262  307829 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-986220 && echo "default-k8s-diff-port-986220" | sudo tee /etc/hostname
	I1018 09:16:23.757907  307829 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-986220
	
	I1018 09:16:23.757979  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:23.777613  307829 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:23.777861  307829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 09:16:23.777889  307829 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-986220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-986220/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-986220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:16:23.916520  307829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
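The SSH script above pins the node's own hostname to 127.0.1.1 so it resolves without DNS. A hypothetical spot check from the host (container name taken from the docker run line above):

	# Both should report default-k8s-diff-port-986220.
	docker exec default-k8s-diff-port-986220 hostname
	docker exec default-k8s-diff-port-986220 grep 127.0.1.1 /etc/hosts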
	I1018 09:16:23.916547  307829 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:16:23.916591  307829 ubuntu.go:190] setting up certificates
	I1018 09:16:23.916606  307829 provision.go:84] configureAuth start
	I1018 09:16:23.916674  307829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-986220
	I1018 09:16:23.935731  307829 provision.go:143] copyHostCerts
	I1018 09:16:23.935809  307829 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:16:23.935828  307829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:16:23.935910  307829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:16:23.936072  307829 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:16:23.936088  307829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:16:23.936136  307829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:16:23.936218  307829 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:16:23.936228  307829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:16:23.936286  307829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:16:23.936407  307829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-986220 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-986220 localhost minikube]
	I1018 09:16:24.096815  307829 provision.go:177] copyRemoteCerts
	I1018 09:16:24.096879  307829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:16:24.096916  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.116412  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:24.215442  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:16:24.236994  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 09:16:24.256007  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:16:24.275068  307829 provision.go:87] duration metric: took 358.446736ms to configureAuth
	I1018 09:16:24.275096  307829 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:16:24.275276  307829 config.go:182] Loaded profile config "default-k8s-diff-port-986220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:24.275405  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.295823  307829 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:24.296078  307829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 09:16:24.296097  307829 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:16:24.553053  307829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:16:24.553082  307829 machine.go:96] duration metric: took 4.125023459s to provisionDockerMachine
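The tee command a few lines up drops the insecure-registry flag into a sysconfig file that CRI-O picks up on restart; the echoed output confirms what was written. A hypothetical check that the drop-in landed:

	# Expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	docker exec default-k8s-diff-port-986220 cat /etc/sysconfig/crio.minikube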
	I1018 09:16:24.553094  307829 client.go:171] duration metric: took 9.862444073s to LocalClient.Create
	I1018 09:16:24.553114  307829 start.go:167] duration metric: took 9.862511631s to libmachine.API.Create "default-k8s-diff-port-986220"
	I1018 09:16:24.553124  307829 start.go:293] postStartSetup for "default-k8s-diff-port-986220" (driver="docker")
	I1018 09:16:24.553138  307829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:16:24.553242  307829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:16:24.553291  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.572128  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:24.672893  307829 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:16:24.676680  307829 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:16:24.676709  307829 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:16:24.676719  307829 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:16:24.676777  307829 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:16:24.676867  307829 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:16:24.676983  307829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:16:24.686464  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:16:24.708946  307829 start.go:296] duration metric: took 155.806152ms for postStartSetup
	I1018 09:16:24.709434  307829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-986220
	I1018 09:16:24.729672  307829 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/config.json ...
	I1018 09:16:24.729981  307829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:16:24.730033  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.749138  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:24.846031  307829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:16:24.851592  307829 start.go:128] duration metric: took 10.163744383s to createHost
	I1018 09:16:24.851619  307829 start.go:83] releasing machines lock for "default-k8s-diff-port-986220", held for 10.163895422s
	I1018 09:16:24.851680  307829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-986220
	I1018 09:16:24.871446  307829 ssh_runner.go:195] Run: cat /version.json
	I1018 09:16:24.871492  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.871527  307829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:16:24.871607  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:24.892448  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:24.892466  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:25.047556  307829 ssh_runner.go:195] Run: systemctl --version
	I1018 09:16:25.056042  307829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:16:25.095154  307829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:16:25.100317  307829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:16:25.100404  307829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:16:25.135472  307829 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
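The find/mv above sidelines the stock bridge and podman CNI configs by renaming them with a .mk_disabled suffix, leaving the kindnet config (recommended at the top of this section) as the only one CRI-O will load. Hypothetical check:

	# *.mk_disabled entries are ignored by CRI-O; only *.conf/*.conflist files count.
	docker exec default-k8s-diff-port-986220 ls /etc/cni/net.d/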
	I1018 09:16:25.135500  307829 start.go:495] detecting cgroup driver to use...
	I1018 09:16:25.135533  307829 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:16:25.135579  307829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:16:25.163992  307829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:16:25.179086  307829 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:16:25.179151  307829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:16:25.197806  307829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:16:25.218805  307829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:16:25.310534  307829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:16:25.402675  307829 docker.go:234] disabling docker service ...
	I1018 09:16:25.402736  307829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:16:25.424774  307829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:16:25.439087  307829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:16:25.533380  307829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:16:25.620820  307829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:16:25.636909  307829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:16:25.654401  307829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:16:25.654463  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.667479  307829 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:16:25.667553  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.678806  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.692980  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.704763  307829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:16:25.715821  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.727218  307829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.742569  307829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:25.752002  307829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:16:25.760372  307829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:16:25.768535  307829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:25.856991  307829 ssh_runner.go:195] Run: sudo systemctl restart crio
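Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A sketch of verifying that (expected values reconstructed from the commands, not captured output):

	docker exec default-k8s-diff-port-986220 grep -E \
	  'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, roughly:
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
	#   pause_image = "registry.k8s.io/pause:3.10.1"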
	I1018 09:16:25.969026  307829 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:16:25.969096  307829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:16:25.974137  307829 start.go:563] Will wait 60s for crictl version
	I1018 09:16:25.974200  307829 ssh_runner.go:195] Run: which crictl
	I1018 09:16:25.978663  307829 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:16:26.006946  307829 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:16:26.007028  307829 ssh_runner.go:195] Run: crio --version
	I1018 09:16:26.037634  307829 ssh_runner.go:195] Run: crio --version
	I1018 09:16:26.069278  307829 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:16:21.481637  309439 out.go:252] * Restarting existing docker container for "no-preload-031066" ...
	I1018 09:16:21.481720  309439 cli_runner.go:164] Run: docker start no-preload-031066
	I1018 09:16:21.732544  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:21.752925  309439 kic.go:430] container "no-preload-031066" state is running.
	I1018 09:16:21.753416  309439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-031066
	I1018 09:16:21.774132  309439 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/config.json ...
	I1018 09:16:21.774479  309439 machine.go:93] provisionDockerMachine start ...
	I1018 09:16:21.774570  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:21.795137  309439 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:21.795458  309439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 09:16:21.795477  309439 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:16:21.796069  309439 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52300->127.0.0.1:33113: read: connection reset by peer
	I1018 09:16:24.935395  309439 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-031066
	
	I1018 09:16:24.935424  309439 ubuntu.go:182] provisioning hostname "no-preload-031066"
	I1018 09:16:24.935491  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:24.955546  309439 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:24.955764  309439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 09:16:24.955779  309439 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-031066 && echo "no-preload-031066" | sudo tee /etc/hostname
	I1018 09:16:25.103825  309439 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-031066
	
	I1018 09:16:25.103917  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:25.127296  309439 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:25.127611  309439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 09:16:25.127652  309439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-031066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-031066/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-031066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:16:25.274198  309439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:16:25.274224  309439 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:16:25.274267  309439 ubuntu.go:190] setting up certificates
	I1018 09:16:25.274280  309439 provision.go:84] configureAuth start
	I1018 09:16:25.274327  309439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-031066
	I1018 09:16:25.295152  309439 provision.go:143] copyHostCerts
	I1018 09:16:25.295209  309439 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:16:25.295222  309439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:16:25.295281  309439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:16:25.295411  309439 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:16:25.295423  309439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:16:25.295448  309439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:16:25.295525  309439 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:16:25.295533  309439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:16:25.295554  309439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:16:25.295606  309439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.no-preload-031066 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-031066]
	I1018 09:16:25.425118  309439 provision.go:177] copyRemoteCerts
	I1018 09:16:25.425176  309439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:16:25.425241  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:25.445036  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:25.543837  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:16:25.565616  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:16:25.589029  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:16:25.608484  309439 provision.go:87] duration metric: took 334.191405ms to configureAuth
	I1018 09:16:25.608516  309439 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:16:25.608733  309439 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:25.608856  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:25.632064  309439 main.go:141] libmachine: Using SSH client type: native
	I1018 09:16:25.632401  309439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 09:16:25.632427  309439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:16:25.957864  309439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:16:25.957894  309439 machine.go:96] duration metric: took 4.183393935s to provisionDockerMachine
	I1018 09:16:25.957909  309439 start.go:293] postStartSetup for "no-preload-031066" (driver="docker")
	I1018 09:16:25.957922  309439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:16:25.957977  309439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:16:25.958020  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:25.980314  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:26.082603  309439 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:16:26.086751  309439 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:16:26.086778  309439 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:16:26.086789  309439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:16:26.086848  309439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:16:26.086937  309439 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:16:26.087048  309439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:16:26.096192  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:16:26.115776  309439 start.go:296] duration metric: took 157.850809ms for postStartSetup
	I1018 09:16:26.115859  309439 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:16:26.115914  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:26.137971  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:26.234585  309439 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:16:26.239791  309439 fix.go:56] duration metric: took 4.779536543s for fixHost
	I1018 09:16:26.239820  309439 start.go:83] releasing machines lock for "no-preload-031066", held for 4.779588591s
	I1018 09:16:26.239895  309439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-031066
	I1018 09:16:26.259555  309439 ssh_runner.go:195] Run: cat /version.json
	W1018 09:16:23.111428  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:25.112093  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	I1018 09:16:26.259669  309439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:16:26.259627  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:26.259792  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:26.281760  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:26.281753  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:26.435671  309439 ssh_runner.go:195] Run: systemctl --version
	I1018 09:16:26.443263  309439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:16:26.486908  309439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:16:26.492101  309439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:16:26.492171  309439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:16:26.501157  309439 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:16:26.501179  309439 start.go:495] detecting cgroup driver to use...
	I1018 09:16:26.501207  309439 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:16:26.501261  309439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:16:26.517601  309439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:16:26.535073  309439 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:16:26.535137  309439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:16:26.559014  309439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:16:26.573192  309439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:16:26.664628  309439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:16:26.753369  309439 docker.go:234] disabling docker service ...
	I1018 09:16:26.753441  309439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:16:26.769930  309439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:16:26.784250  309439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:16:26.875825  309439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:16:26.963494  309439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:16:26.977661  309439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:16:26.995292  309439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:16:26.995366  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.005257  309439 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:16:27.005335  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.015687  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.026502  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.037104  309439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:16:27.046231  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.056592  309439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.066210  309439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:16:27.076520  309439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:16:27.086299  309439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:16:27.099809  309439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:27.202216  309439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:16:27.320390  309439 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:16:27.320456  309439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:16:27.325144  309439 start.go:563] Will wait 60s for crictl version
	I1018 09:16:27.325213  309439 ssh_runner.go:195] Run: which crictl
	I1018 09:16:27.329944  309439 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:16:27.360229  309439 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:16:27.360337  309439 ssh_runner.go:195] Run: crio --version
	I1018 09:16:27.392843  309439 ssh_runner.go:195] Run: crio --version
	I1018 09:16:27.430211  309439 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:16:26.070774  307829 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-986220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:16:26.091069  307829 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 09:16:26.095294  307829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
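The one-liner above is a filter-then-copy idiom for /etc/hosts: strip any stale entry for the name, append the fresh mapping, and cp the temp file back rather than using mv or sed -i, presumably because /etc/hosts is bind-mounted into the container and must be rewritten in place. A generalized sketch of the same idiom:

	# Hypothetical generalization of the command above.
	IP=192.168.94.1 NAME=host.minikube.internal
	{ grep -v "$(printf '\t')${NAME}\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts   # cp keeps the same inode, so a bind mount survives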
	I1018 09:16:26.106817  307829 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-986220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:16:26.106953  307829 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:16:26.107001  307829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:16:26.146050  307829 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:16:26.146071  307829 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:16:26.146117  307829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:16:26.175872  307829 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:16:26.175899  307829 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:16:26.175908  307829 kubeadm.go:934] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1018 09:16:26.176038  307829 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-986220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
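The unit fragment above is written out as a systemd drop-in (the 10-kubeadm.conf scp a few lines below). A hypothetical way to review the merged unit inside the node:

	# Shows kubelet.service plus the 10-kubeadm.conf drop-in in one view.
	docker exec default-k8s-diff-port-986220 systemctl cat kubelet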
	I1018 09:16:26.176148  307829 ssh_runner.go:195] Run: crio config
	I1018 09:16:26.227370  307829 cni.go:84] Creating CNI manager for ""
	I1018 09:16:26.227396  307829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:16:26.227416  307829 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:16:26.227445  307829 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-986220 NodeName:default-k8s-diff-port-986220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:16:26.227594  307829 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-986220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:16:26.227669  307829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:16:26.236922  307829 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:16:26.236985  307829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:16:26.246249  307829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 09:16:26.261061  307829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:16:26.281576  307829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
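With the manifest copied to /var/tmp/minikube/kubeadm.yaml.new, one hedged sanity check would be to run it through kubeadm's own validator (a hypothetical step, assuming "kubeadm config validate" is available in this kubeadm version; the test itself just proceeds to init against the file):

	docker exec default-k8s-diff-port-986220 \
	  /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new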
	I1018 09:16:26.296929  307829 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:16:26.300975  307829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:16:26.313470  307829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:26.402470  307829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:16:26.432058  307829 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220 for IP: 192.168.94.2
	I1018 09:16:26.432089  307829 certs.go:195] generating shared ca certs ...
	I1018 09:16:26.432109  307829 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:26.432273  307829 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:16:26.432354  307829 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:16:26.432374  307829 certs.go:257] generating profile certs ...
	I1018 09:16:26.432456  307829 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.key
	I1018 09:16:26.432479  307829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.crt with IP's: []
	I1018 09:16:26.858948  307829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.crt ...
	I1018 09:16:26.858974  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.crt: {Name:mk51c8869bcfadfee754b4430b46c6f8826cd48e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:26.859138  307829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.key ...
	I1018 09:16:26.859151  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.key: {Name:mk25866fa200b9b02b356bf6c37bf61a8173ffbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:26.859263  307829 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key.6dd2aec8
	I1018 09:16:26.859285  307829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt.6dd2aec8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1018 09:16:27.395262  307829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt.6dd2aec8 ...
	I1018 09:16:27.395288  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt.6dd2aec8: {Name:mk6e21b854f39a72826bd85be5ec5fc298b199fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:27.395475  307829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key.6dd2aec8 ...
	I1018 09:16:27.395491  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key.6dd2aec8: {Name:mk0894105faa3c087ffd9c9fdc31379b6526b690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:27.395577  307829 certs.go:382] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt.6dd2aec8 -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt
	I1018 09:16:27.395651  307829 certs.go:386] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key.6dd2aec8 -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key
	I1018 09:16:27.395705  307829 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.key
	I1018 09:16:27.395722  307829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.crt with IP's: []
	I1018 09:16:27.602598  307829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.crt ...
	I1018 09:16:27.602624  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.crt: {Name:mk8306903932dd1bb11b8ea9409214667367047c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:27.602816  307829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.key ...
	I1018 09:16:27.602835  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.key: {Name:mkd19044a8fb32eff2e080ea7a1555b5849cc3b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
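The apiserver certificate generated above carries IP SANs for the service VIP (10.96.0.1), localhost, 10.0.0.1, and the node IP. A minimal crypto/x509 sketch of issuing a certificate with those SANs; it is self-signed for brevity, whereas minikube signs the profile certs against the shared minikubeCA:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs seen in the log for the apiserver cert.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.94.2"),
		},
	}
	// Template doubles as parent, making the cert self-signed.
	der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}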
	I1018 09:16:27.603059  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:16:27.603102  307829 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:16:27.603119  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:16:27.603157  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:16:27.603187  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:16:27.603220  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:16:27.603272  307829 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:16:27.603874  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:16:27.624224  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:16:27.643202  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:16:27.667310  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:16:27.687198  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 09:16:27.706826  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:16:27.726322  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:16:27.745966  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:16:27.766635  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:16:27.789853  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:16:27.812727  307829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:16:27.839899  307829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:16:27.854585  307829 ssh_runner.go:195] Run: openssl version
	I1018 09:16:27.862043  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:16:27.871459  307829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:27.875984  307829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:27.876057  307829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:27.912565  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:16:27.923183  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:16:27.932933  307829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:16:27.937886  307829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:16:27.937949  307829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:16:27.977776  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:16:27.988396  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:16:27.997972  307829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.002236  307829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.002294  307829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.040608  307829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
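The three symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links, which is how the system trust store locates each CA file. A small sketch of that hash-and-link step; it shells out to openssl just as the log does, and the target directory is a stand-in for /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink computes OpenSSL's subject hash for a PEM certificate and creates
// the <hash>.0 symlink the trust store expects, matching the log's
// `openssl x509 -hash -noout` followed by `ln -fs`.
func hashLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Stand-in directory; the log links into /etc/ssl/certs, which needs root.
	if err := os.MkdirAll("/tmp/certs", 0755); err != nil {
		panic(err)
	}
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}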
	I1018 09:16:28.051449  307829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:16:28.055923  307829 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:16:28.055980  307829 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-986220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:16:28.056051  307829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:16:28.056119  307829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:16:28.089132  307829 cri.go:89] found id: ""
	I1018 09:16:28.089192  307829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:16:28.099177  307829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:16:28.109267  307829 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:16:28.109329  307829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:16:28.120642  307829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:16:28.120666  307829 kubeadm.go:157] found existing configuration files:
	
	I1018 09:16:28.120718  307829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1018 09:16:28.131668  307829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:16:28.131734  307829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:16:28.142231  307829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1018 09:16:28.155016  307829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:16:28.155078  307829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:16:28.166186  307829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1018 09:16:28.177468  307829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:16:28.177540  307829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:16:28.189027  307829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1018 09:16:28.199965  307829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:16:28.200051  307829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
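The grep/rm pairs above apply one rule: a component kubeconfig that does not mention the expected control-plane endpoint is considered stale and is removed before kubeadm init runs. The same loop, condensed into a sketch (endpoint and file list taken from the log; the real paths need root):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at our endpoint; keep it
		}
		// Missing files and files without the endpoint are treated alike:
		// remove (rm -f is a no-op when the file is absent).
		os.Remove(f)
		fmt.Printf("removed stale %s\n", f)
	}
}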
	I1018 09:16:28.209045  307829 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:16:28.261581  307829 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:16:28.261670  307829 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:16:28.299228  307829 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:16:28.299358  307829 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:16:28.299410  307829 kubeadm.go:318] OS: Linux
	I1018 09:16:28.299478  307829 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:16:28.299612  307829 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:16:28.299657  307829 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:16:28.299700  307829 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:16:28.299742  307829 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:16:28.299787  307829 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:16:28.299829  307829 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:16:28.299868  307829 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:16:28.395707  307829 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:16:28.395841  307829 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:16:28.395964  307829 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:16:28.413235  307829 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1018 09:16:24.295019  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:26.793721  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:27.431467  309439 cli_runner.go:164] Run: docker network inspect no-preload-031066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:16:27.452092  309439 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 09:16:27.456746  309439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:16:27.467844  309439 kubeadm.go:883] updating cluster {Name:no-preload-031066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:16:27.467966  309439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:16:27.468011  309439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:16:27.503028  309439 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:16:27.503054  309439 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:16:27.503062  309439 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 09:16:27.503150  309439 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-031066 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:16:27.503211  309439 ssh_runner.go:195] Run: crio config
	I1018 09:16:27.551972  309439 cni.go:84] Creating CNI manager for ""
	I1018 09:16:27.552003  309439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:16:27.552027  309439 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:16:27.552059  309439 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-031066 NodeName:no-preload-031066 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:16:27.552228  309439 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-031066"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:16:27.552303  309439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:16:27.561492  309439 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:16:27.561566  309439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:16:27.570137  309439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 09:16:27.584540  309439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:16:27.598223  309439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1018 09:16:27.612505  309439 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:16:27.616378  309439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:16:27.628297  309439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:27.719096  309439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:16:27.742152  309439 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066 for IP: 192.168.85.2
	I1018 09:16:27.742182  309439 certs.go:195] generating shared ca certs ...
	I1018 09:16:27.742204  309439 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:27.742412  309439 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:16:27.742502  309439 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:16:27.742521  309439 certs.go:257] generating profile certs ...
	I1018 09:16:27.742635  309439 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/client.key
	I1018 09:16:27.742703  309439 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/apiserver.key.5b17cd89
	I1018 09:16:27.742770  309439 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/proxy-client.key
	I1018 09:16:27.742919  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:16:27.742965  309439 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:16:27.742982  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:16:27.743018  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:16:27.743053  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:16:27.743084  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:16:27.743146  309439 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:16:27.744065  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:16:27.766446  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:16:27.789662  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:16:27.810502  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:16:27.837044  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:16:27.858195  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:16:27.878029  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:16:27.898104  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/no-preload-031066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:16:27.918370  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:16:27.938784  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:16:27.959384  309439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:16:27.979401  309439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:16:27.994769  309439 ssh_runner.go:195] Run: openssl version
	I1018 09:16:28.001920  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:16:28.011616  309439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.015846  309439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.015902  309439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:16:28.055574  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:16:28.064720  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:16:28.074630  309439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:28.079603  309439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:28.079670  309439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:16:28.127275  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:16:28.140151  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:16:28.152584  309439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:16:28.158002  309439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:16:28.158067  309439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:16:28.211197  309439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:16:28.220083  309439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:16:28.224791  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:16:28.278506  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:16:28.328610  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:16:28.392663  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:16:28.455567  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:16:28.519223  309439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
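Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate expires within the next 24 hours, which is how this start decides the existing certs can be reused rather than regenerated. An equivalent check with Go's crypto/x509 (the helper name is ours, not minikube's):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside d,
// matching `openssl x509 -noout -checkend <seconds>` semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}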
	I1018 09:16:28.580698  309439 kubeadm.go:400] StartCluster: {Name:no-preload-031066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-031066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:16:28.580833  309439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:16:28.580901  309439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:16:28.624088  309439 cri.go:89] found id: "153dd41ff60f495d247d4bd42054dd9255c2fe5ccbc173f31021152a50b30308"
	I1018 09:16:28.624113  309439 cri.go:89] found id: "b51a1224ef6b876bc35ce20f2366f94525e300e4432dff8348abbde915ade5af"
	I1018 09:16:28.624118  309439 cri.go:89] found id: "62682de07bbfeb0d0f0c6405121566236410d571314651f369c15f65938b548a"
	I1018 09:16:28.624125  309439 cri.go:89] found id: "db536597b2746191742cfa1b8df28f2fe3935b9a553d5543f993db2773c9f6a1"
	I1018 09:16:28.624129  309439 cri.go:89] found id: ""
	I1018 09:16:28.624177  309439 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:16:28.642550  309439 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:16:28Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:16:28.642622  309439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:16:28.658418  309439 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:16:28.658440  309439 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:16:28.658714  309439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:16:28.670518  309439 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:16:28.671730  309439 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-031066" does not appear in /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:16:28.672554  309439 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-5897/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-031066" cluster setting kubeconfig missing "no-preload-031066" context setting]
	I1018 09:16:28.673952  309439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:28.676681  309439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:16:28.689778  309439 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 09:16:28.689902  309439 kubeadm.go:601] duration metric: took 31.455758ms to restartPrimaryControlPlane
	I1018 09:16:28.689919  309439 kubeadm.go:402] duration metric: took 109.246641ms to StartCluster
	I1018 09:16:28.689940  309439 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:28.690009  309439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:16:28.692230  309439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
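kubeconfig.go above detects that the shared kubeconfig is missing both the cluster and the context entry for no-preload-031066 and repairs the file under a write lock. A sketch of the same repair using client-go's clientcmd package (this needs k8s.io/client-go as a module dependency; the locking and credential fields are omitted for brevity):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	const path = "/home/jenkins/minikube-integration/21767-5897/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	const name = "no-preload-031066"
	if _, ok := cfg.Clusters[name]; !ok {
		// Server address and port taken from the node line in the log;
		// a complete entry would also set the CA data.
		cfg.Clusters[name] = &api.Cluster{Server: "https://192.168.85.2:8443"}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}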
	I1018 09:16:28.692547  309439 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:16:28.692792  309439 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:28.692794  309439 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:16:28.692955  309439 addons.go:69] Setting storage-provisioner=true in profile "no-preload-031066"
	I1018 09:16:28.692978  309439 addons.go:238] Setting addon storage-provisioner=true in "no-preload-031066"
	I1018 09:16:28.692975  309439 addons.go:69] Setting dashboard=true in profile "no-preload-031066"
	I1018 09:16:28.692996  309439 addons.go:69] Setting default-storageclass=true in profile "no-preload-031066"
	I1018 09:16:28.693007  309439 addons.go:238] Setting addon dashboard=true in "no-preload-031066"
	I1018 09:16:28.693015  309439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-031066"
	W1018 09:16:28.693018  309439 addons.go:247] addon dashboard should already be in state true
	I1018 09:16:28.693055  309439 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:16:28.693384  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	W1018 09:16:28.692987  309439 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:16:28.693557  309439 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:16:28.693612  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:28.694012  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:28.694776  309439 out.go:179] * Verifying Kubernetes components...
	I1018 09:16:28.696220  309439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:28.724766  309439 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:16:28.726175  309439 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:16:28.727367  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:16:28.727390  309439 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:16:28.727455  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:28.728591  309439 addons.go:238] Setting addon default-storageclass=true in "no-preload-031066"
	W1018 09:16:28.728613  309439 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:16:28.728642  309439 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:16:28.729157  309439 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:16:28.730548  309439 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:16:28.416330  307829 out.go:252]   - Generating certificates and keys ...
	I1018 09:16:28.416469  307829 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:16:28.416585  307829 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:16:28.961544  307829 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:16:29.130817  307829 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:16:28.733826  309439 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:16:28.733946  309439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:16:28.734145  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:28.758429  309439 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:16:28.758462  309439 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:16:28.758530  309439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:16:28.765380  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:28.782447  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:28.799019  309439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:16:28.912215  309439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:16:28.934701  309439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:16:28.935357  309439 node_ready.go:35] waiting up to 6m0s for node "no-preload-031066" to be "Ready" ...
	I1018 09:16:28.970747  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:16:28.970915  309439 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:16:28.972898  309439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:16:29.005063  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:16:29.005087  309439 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:16:29.060862  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:16:29.060897  309439 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:16:29.081097  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:16:29.081122  309439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:16:29.100999  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:16:29.101045  309439 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:16:29.120688  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:16:29.120720  309439 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:16:29.139590  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:16:29.139620  309439 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:16:29.157828  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:16:29.157857  309439 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:16:29.177540  309439 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:16:29.177566  309439 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:16:29.198120  309439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:16:30.672748  309439 node_ready.go:49] node "no-preload-031066" is "Ready"
	I1018 09:16:30.672787  309439 node_ready.go:38] duration metric: took 1.737388567s for node "no-preload-031066" to be "Ready" ...
	I1018 09:16:30.672804  309439 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:16:30.672858  309439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:16:31.507012  309439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.572254383s)
	I1018 09:16:31.507099  309439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.534180069s)
	I1018 09:16:31.507726  309439 api_server.go:72] duration metric: took 2.815152526s to wait for apiserver process to appear ...
	I1018 09:16:31.507747  309439 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:16:31.507767  309439 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:16:31.508258  309439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.309493299s)
	I1018 09:16:31.510608  309439 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-031066 addons enable metrics-server
	
	I1018 09:16:31.515473  309439 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:16:31.515509  309439 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
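The [+]/[-] lines are the apiserver's named health checks as reported by the aggregated /healthz handler; the two [-] poststarthooks simply have not finished on this freshly restarted control plane, so the endpoint returns 500 until they do. The same view can be reproduced by hand from the node (a sketch; the address is copied from the log, and -k skips TLS verification, which the anonymously readable health endpoints tolerate):

	# ask the apiserver to enumerate every check, matching the [+]/[-] list above
	curl -sk "https://192.168.85.2:8443/healthz?verbose"
	# a single named check can also be probed directly:
	curl -sk "https://192.168.85.2:8443/healthz/poststarthook/rbac/bootstrap-roles"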
	I1018 09:16:31.521256  309439 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1018 09:16:27.113789  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:29.611877  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:28.804271  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:31.298052  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:29.615880  307829 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:16:29.662294  307829 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:16:30.234104  307829 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:16:30.234392  307829 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-986220 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:16:30.435950  307829 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:16:30.436322  307829 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-986220 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:16:30.721773  307829 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:16:31.077742  307829 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:16:31.728841  307829 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:16:31.729054  307829 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:16:32.282669  307829 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:16:32.757782  307829 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:16:33.241823  307829 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:16:33.509889  307829 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:16:33.955012  307829 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:16:33.955761  307829 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:16:33.959972  307829 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:16:33.961487  307829 out.go:252]   - Booting up control plane ...
	I1018 09:16:33.961586  307829 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:16:33.961682  307829 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:16:33.962289  307829 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:16:33.978521  307829 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:16:33.979073  307829 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:16:33.987745  307829 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:16:33.988059  307829 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:16:33.988143  307829 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:16:34.106714  307829 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:16:34.106869  307829 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
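The [certs], [kubeconfig], [etcd] and [control-plane] lines above are kubeadm's init phases relayed through minikube. Roughly the equivalent standalone invocations, as a sketch (the config path is an assumption; minikube generates its own kubeadm configuration on the node):

	kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml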
	I1018 09:16:31.522773  309439 addons.go:514] duration metric: took 2.829985828s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:16:32.008497  309439 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:16:32.013652  309439 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 09:16:32.014899  309439 api_server.go:141] control plane version: v1.34.1
	I1018 09:16:32.014936  309439 api_server.go:131] duration metric: took 507.174967ms to wait for apiserver health ...
	I1018 09:16:32.014946  309439 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:16:32.018941  309439 system_pods.go:59] 8 kube-system pods found
	I1018 09:16:32.018978  309439 system_pods.go:61] "coredns-66bc5c9577-h44wj" [0f9ac8bf-4d8f-489f-a5bb-f8ef2d832a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:32.018993  309439 system_pods.go:61] "etcd-no-preload-031066" [46ee9eac-4087-442e-855b-50a8b65b06df] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:16:32.019001  309439 system_pods.go:61] "kindnet-k7m9t" [08c34b72-06a7-4a73-b703-ce61dbf3a37f] Running
	I1018 09:16:32.019011  309439 system_pods.go:61] "kube-apiserver-no-preload-031066" [7b20717e-d3b8-4f72-9c92-04c74b236964] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:16:32.019019  309439 system_pods.go:61] "kube-controller-manager-no-preload-031066" [e8145322-b25f-40ec-aa8f-39b64900226c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:16:32.019025  309439 system_pods.go:61] "kube-proxy-jr5qn" [1ae92f3f-9c07-4fb0-8334-549bfd4cac76] Running
	I1018 09:16:32.019033  309439 system_pods.go:61] "kube-scheduler-no-preload-031066" [2d6fcc42-b0a0-46d3-8eb1-7408eadd4dc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:16:32.019047  309439 system_pods.go:61] "storage-provisioner" [5b3e8950-c8a2-4205-b3aa-5c48157fc9d1] Running
	I1018 09:16:32.019057  309439 system_pods.go:74] duration metric: took 4.103211ms to wait for pod list to return data ...
	I1018 09:16:32.019071  309439 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:16:32.021957  309439 default_sa.go:45] found service account: "default"
	I1018 09:16:32.021981  309439 default_sa.go:55] duration metric: took 2.904005ms for default service account to be created ...
	I1018 09:16:32.021993  309439 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:16:32.025565  309439 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:32.025597  309439 system_pods.go:89] "coredns-66bc5c9577-h44wj" [0f9ac8bf-4d8f-489f-a5bb-f8ef2d832a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:32.025607  309439 system_pods.go:89] "etcd-no-preload-031066" [46ee9eac-4087-442e-855b-50a8b65b06df] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:16:32.025615  309439 system_pods.go:89] "kindnet-k7m9t" [08c34b72-06a7-4a73-b703-ce61dbf3a37f] Running
	I1018 09:16:32.025624  309439 system_pods.go:89] "kube-apiserver-no-preload-031066" [7b20717e-d3b8-4f72-9c92-04c74b236964] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:16:32.025633  309439 system_pods.go:89] "kube-controller-manager-no-preload-031066" [e8145322-b25f-40ec-aa8f-39b64900226c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:16:32.025650  309439 system_pods.go:89] "kube-proxy-jr5qn" [1ae92f3f-9c07-4fb0-8334-549bfd4cac76] Running
	I1018 09:16:32.025660  309439 system_pods.go:89] "kube-scheduler-no-preload-031066" [2d6fcc42-b0a0-46d3-8eb1-7408eadd4dc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:16:32.025674  309439 system_pods.go:89] "storage-provisioner" [5b3e8950-c8a2-4205-b3aa-5c48157fc9d1] Running
	I1018 09:16:32.025689  309439 system_pods.go:126] duration metric: took 3.68704ms to wait for k8s-apps to be running ...
	I1018 09:16:32.025702  309439 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:16:32.025758  309439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:16:32.043072  309439 system_svc.go:56] duration metric: took 17.35931ms WaitForService to wait for kubelet
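The wait summarized above reduces to two hand-checkable probes: the kube-system pod list and the kubelet unit state. A sketch, assuming the kubeconfig context is named after the profile:

	kubectl --context no-preload-031066 -n kube-system get pods
	sudo systemctl is-active --quiet kubelet && echo kubelet active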
	I1018 09:16:32.043109  309439 kubeadm.go:586] duration metric: took 3.350536737s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:16:32.043130  309439 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:16:32.047292  309439 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:16:32.047324  309439 node_conditions.go:123] node cpu capacity is 8
	I1018 09:16:32.047338  309439 node_conditions.go:105] duration metric: took 4.202003ms to run NodePressure ...
	I1018 09:16:32.047376  309439 start.go:241] waiting for startup goroutines ...
	I1018 09:16:32.047386  309439 start.go:246] waiting for cluster config update ...
	I1018 09:16:32.047405  309439 start.go:255] writing updated cluster config ...
	I1018 09:16:32.047889  309439 ssh_runner.go:195] Run: rm -f paused
	I1018 09:16:32.053052  309439 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:16:32.058191  309439 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h44wj" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:16:34.063715  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:36.065474  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
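The extra 4m0s wait polls each labelled kube-system pod for the Ready condition; kubectl wait expresses roughly the same loop declaratively (a sketch; the label is taken from the selector shown in the log):

	kubectl --context no-preload-031066 -n kube-system \
		wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m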
	W1018 09:16:32.111388  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:34.111683  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:36.112976  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:33.795029  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:36.295172  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:38.295449  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:35.108493  307829 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002011819s
	I1018 09:16:35.113205  307829 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:16:35.113369  307829 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1018 09:16:35.113508  307829 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:16:35.113618  307829 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:16:37.315142  307829 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.201785879s
	I1018 09:16:38.892629  307829 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.779357908s
	W1018 09:16:38.565895  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:41.065990  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	I1018 09:16:41.117083  307829 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.003684215s
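The three control-plane-check probes hit the components' standard health ports, with the apiserver on this profile's custom 8444. They are reproducible by hand from inside the node (a sketch; -k because these endpoints serve self-signed certificates):

	curl -sk https://192.168.94.2:8444/livez      # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez        # kube-scheduler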
	I1018 09:16:41.169314  307829 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:16:41.210944  307829 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:16:41.252804  307829 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:16:41.253079  307829 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-986220 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:16:41.306958  307829 kubeadm.go:318] [bootstrap-token] Using token: f3p04i.qkc1arqowwwf8733
	W1018 09:16:38.611800  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	W1018 09:16:40.612030  295389 node_ready.go:57] node "embed-certs-880603" has "Ready":"False" status (will retry)
	I1018 09:16:41.611642  295389 node_ready.go:49] node "embed-certs-880603" is "Ready"
	I1018 09:16:41.611675  295389 node_ready.go:38] duration metric: took 41.503667581s for node "embed-certs-880603" to be "Ready" ...
	I1018 09:16:41.611697  295389 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:16:41.611765  295389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:16:41.627995  295389 api_server.go:72] duration metric: took 41.84915441s to wait for apiserver process to appear ...
	I1018 09:16:41.628025  295389 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:16:41.628048  295389 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:16:41.633763  295389 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:16:41.635685  295389 api_server.go:141] control plane version: v1.34.1
	I1018 09:16:41.635717  295389 api_server.go:131] duration metric: took 7.685454ms to wait for apiserver health ...
	I1018 09:16:41.635728  295389 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:16:41.640645  295389 system_pods.go:59] 8 kube-system pods found
	I1018 09:16:41.640688  295389 system_pods.go:61] "coredns-66bc5c9577-7fnw7" [04bb2d33-29f9-45e9-a6b1-e2b770651c0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:41.640697  295389 system_pods.go:61] "etcd-embed-certs-880603" [da7643b6-9066-4e2f-99eb-c2e6d085f539] Running
	I1018 09:16:41.640707  295389 system_pods.go:61] "kindnet-wzdm5" [20629c75-ca93-46db-875e-49d67c7b3f06] Running
	I1018 09:16:41.640713  295389 system_pods.go:61] "kube-apiserver-embed-certs-880603" [1e4ec0ef-dc43-4939-a733-02690b04d19b] Running
	I1018 09:16:41.640717  295389 system_pods.go:61] "kube-controller-manager-embed-certs-880603" [ccc2c9f0-0b2c-46e5-bea8-c82e8e3124ec] Running
	I1018 09:16:41.640720  295389 system_pods.go:61] "kube-proxy-k4kcs" [83d1821f-468a-4bf0-8fc0-e40e0668f6ff] Running
	I1018 09:16:41.640724  295389 system_pods.go:61] "kube-scheduler-embed-certs-880603" [5635b7e1-dca9-4a7e-8b9c-aa96067fd707] Running
	I1018 09:16:41.640728  295389 system_pods.go:61] "storage-provisioner" [d2aa7a09-3332-4744-9180-d307b4fc8194] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:16:41.640734  295389 system_pods.go:74] duration metric: took 5.000069ms to wait for pod list to return data ...
	I1018 09:16:41.640743  295389 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:16:41.644176  295389 default_sa.go:45] found service account: "default"
	I1018 09:16:41.644203  295389 default_sa.go:55] duration metric: took 3.451989ms for default service account to be created ...
	I1018 09:16:41.644216  295389 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:16:41.648178  295389 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:41.648208  295389 system_pods.go:89] "coredns-66bc5c9577-7fnw7" [04bb2d33-29f9-45e9-a6b1-e2b770651c0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:41.648214  295389 system_pods.go:89] "etcd-embed-certs-880603" [da7643b6-9066-4e2f-99eb-c2e6d085f539] Running
	I1018 09:16:41.648220  295389 system_pods.go:89] "kindnet-wzdm5" [20629c75-ca93-46db-875e-49d67c7b3f06] Running
	I1018 09:16:41.648223  295389 system_pods.go:89] "kube-apiserver-embed-certs-880603" [1e4ec0ef-dc43-4939-a733-02690b04d19b] Running
	I1018 09:16:41.648228  295389 system_pods.go:89] "kube-controller-manager-embed-certs-880603" [ccc2c9f0-0b2c-46e5-bea8-c82e8e3124ec] Running
	I1018 09:16:41.648231  295389 system_pods.go:89] "kube-proxy-k4kcs" [83d1821f-468a-4bf0-8fc0-e40e0668f6ff] Running
	I1018 09:16:41.648235  295389 system_pods.go:89] "kube-scheduler-embed-certs-880603" [5635b7e1-dca9-4a7e-8b9c-aa96067fd707] Running
	I1018 09:16:41.648239  295389 system_pods.go:89] "storage-provisioner" [d2aa7a09-3332-4744-9180-d307b4fc8194] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:16:41.648267  295389 retry.go:31] will retry after 192.969575ms: missing components: kube-dns
	I1018 09:16:41.847560  295389 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:41.847602  295389 system_pods.go:89] "coredns-66bc5c9577-7fnw7" [04bb2d33-29f9-45e9-a6b1-e2b770651c0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:41.847612  295389 system_pods.go:89] "etcd-embed-certs-880603" [da7643b6-9066-4e2f-99eb-c2e6d085f539] Running
	I1018 09:16:41.847620  295389 system_pods.go:89] "kindnet-wzdm5" [20629c75-ca93-46db-875e-49d67c7b3f06] Running
	I1018 09:16:41.847626  295389 system_pods.go:89] "kube-apiserver-embed-certs-880603" [1e4ec0ef-dc43-4939-a733-02690b04d19b] Running
	I1018 09:16:41.847633  295389 system_pods.go:89] "kube-controller-manager-embed-certs-880603" [ccc2c9f0-0b2c-46e5-bea8-c82e8e3124ec] Running
	I1018 09:16:41.847637  295389 system_pods.go:89] "kube-proxy-k4kcs" [83d1821f-468a-4bf0-8fc0-e40e0668f6ff] Running
	I1018 09:16:41.847642  295389 system_pods.go:89] "kube-scheduler-embed-certs-880603" [5635b7e1-dca9-4a7e-8b9c-aa96067fd707] Running
	I1018 09:16:41.847646  295389 system_pods.go:89] "storage-provisioner" [d2aa7a09-3332-4744-9180-d307b4fc8194] Running
	I1018 09:16:41.847658  295389 system_pods.go:126] duration metric: took 203.434861ms to wait for k8s-apps to be running ...
	I1018 09:16:41.847708  295389 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:16:41.847760  295389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:16:41.864768  295389 system_svc.go:56] duration metric: took 17.051428ms WaitForService to wait for kubelet
	I1018 09:16:41.864801  295389 kubeadm.go:586] duration metric: took 42.085966942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:16:41.864822  295389 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:16:41.868754  295389 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:16:41.868786  295389 node_conditions.go:123] node cpu capacity is 8
	I1018 09:16:41.868806  295389 node_conditions.go:105] duration metric: took 3.977808ms to run NodePressure ...
	I1018 09:16:41.868820  295389 start.go:241] waiting for startup goroutines ...
	I1018 09:16:41.868838  295389 start.go:246] waiting for cluster config update ...
	I1018 09:16:41.868852  295389 start.go:255] writing updated cluster config ...
	I1018 09:16:41.869184  295389 ssh_runner.go:195] Run: rm -f paused
	I1018 09:16:41.873479  295389 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:16:41.877518  295389 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7fnw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.882231  295389 pod_ready.go:94] pod "coredns-66bc5c9577-7fnw7" is "Ready"
	I1018 09:16:41.882258  295389 pod_ready.go:86] duration metric: took 4.717941ms for pod "coredns-66bc5c9577-7fnw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.884325  295389 pod_ready.go:83] waiting for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.888331  295389 pod_ready.go:94] pod "etcd-embed-certs-880603" is "Ready"
	I1018 09:16:41.888374  295389 pod_ready.go:86] duration metric: took 3.985545ms for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.890515  295389 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.894262  295389 pod_ready.go:94] pod "kube-apiserver-embed-certs-880603" is "Ready"
	I1018 09:16:41.894287  295389 pod_ready.go:86] duration metric: took 3.751424ms for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.896263  295389 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:41.323567  307829 out.go:252]   - Configuring RBAC rules ...
	I1018 09:16:41.323741  307829 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:16:41.323891  307829 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:16:41.401828  307829 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:16:41.461336  307829 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:16:41.465137  307829 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:16:41.469707  307829 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:16:41.525899  307829 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:16:41.942881  307829 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:16:42.524571  307829 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:16:42.525447  307829 kubeadm.go:318] 
	I1018 09:16:42.525556  307829 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:16:42.525568  307829 kubeadm.go:318] 
	I1018 09:16:42.525684  307829 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:16:42.525705  307829 kubeadm.go:318] 
	I1018 09:16:42.525741  307829 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:16:42.525845  307829 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:16:42.525926  307829 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:16:42.525935  307829 kubeadm.go:318] 
	I1018 09:16:42.526007  307829 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:16:42.526017  307829 kubeadm.go:318] 
	I1018 09:16:42.526086  307829 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:16:42.526095  307829 kubeadm.go:318] 
	I1018 09:16:42.526162  307829 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:16:42.526271  307829 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:16:42.526404  307829 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:16:42.526415  307829 kubeadm.go:318] 
	I1018 09:16:42.526533  307829 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:16:42.526640  307829 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:16:42.526665  307829 kubeadm.go:318] 
	I1018 09:16:42.526797  307829 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token f3p04i.qkc1arqowwwf8733 \
	I1018 09:16:42.526958  307829 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f \
	I1018 09:16:42.526992  307829 kubeadm.go:318] 	--control-plane 
	I1018 09:16:42.526998  307829 kubeadm.go:318] 
	I1018 09:16:42.527113  307829 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:16:42.527135  307829 kubeadm.go:318] 
	I1018 09:16:42.527260  307829 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token f3p04i.qkc1arqowwwf8733 \
	I1018 09:16:42.527431  307829 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f 
	I1018 09:16:42.530266  307829 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:16:42.530442  307829 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
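The SystemVerification warning means kubeadm could not load the "configs" kernel module to inspect the running kernel's build options; on many kernels the same data exists as a file instead. A hedged check:

	sudo modprobe configs 2>/dev/null \
		|| ls /proc/config.gz "/boot/config-$(uname -r)" 2>/dev/null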
	I1018 09:16:42.530478  307829 cni.go:84] Creating CNI manager for ""
	I1018 09:16:42.530499  307829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:16:42.533104  307829 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1018 09:16:40.796098  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	W1018 09:16:43.293707  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:42.277552  295389 pod_ready.go:94] pod "kube-controller-manager-embed-certs-880603" is "Ready"
	I1018 09:16:42.277584  295389 pod_ready.go:86] duration metric: took 381.302407ms for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:42.477778  295389 pod_ready.go:83] waiting for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:42.878053  295389 pod_ready.go:94] pod "kube-proxy-k4kcs" is "Ready"
	I1018 09:16:42.878082  295389 pod_ready.go:86] duration metric: took 400.281372ms for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:43.078230  295389 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:43.478123  295389 pod_ready.go:94] pod "kube-scheduler-embed-certs-880603" is "Ready"
	I1018 09:16:43.478149  295389 pod_ready.go:86] duration metric: took 399.897961ms for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:43.478161  295389 pod_ready.go:40] duration metric: took 1.604642015s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:16:43.525821  295389 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:16:43.527615  295389 out.go:179] * Done! kubectl is now configured to use "embed-certs-880603" cluster and "default" namespace by default
	I1018 09:16:42.534703  307829 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:16:42.539385  307829 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:16:42.539408  307829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:16:42.553684  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:16:42.780522  307829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:16:42.780591  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:42.780624  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-986220 minikube.k8s.io/updated_at=2025_10_18T09_16_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=default-k8s-diff-port-986220 minikube.k8s.io/primary=true
	I1018 09:16:42.792392  307829 ops.go:34] apiserver oom_adj: -16
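The runs above apply the kindnet CNI manifest, grant cluster-admin to kube-system service accounts through the minikube-rbac binding, and label the node; the latter two are easy to verify afterwards (a sketch, context name assumed to match the profile):

	kubectl --context default-k8s-diff-port-986220 get clusterrolebinding minikube-rbac -o wide
	kubectl --context default-k8s-diff-port-986220 get node default-k8s-diff-port-986220 --show-labels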
	I1018 09:16:42.879101  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:43.380201  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:43.879591  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:44.380139  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1018 09:16:43.565238  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:46.064027  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	I1018 09:16:44.879531  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:45.379171  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:45.880033  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:46.380235  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:46.879320  307829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:16:46.954716  307829 kubeadm.go:1113] duration metric: took 4.174183533s to wait for elevateKubeSystemPrivileges
	I1018 09:16:46.954763  307829 kubeadm.go:402] duration metric: took 18.898789866s to StartCluster
	I1018 09:16:46.954787  307829 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:46.954887  307829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:16:46.956811  307829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:16:46.957059  307829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:16:46.957068  307829 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:16:46.957164  307829 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:16:46.957257  307829 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-986220"
	I1018 09:16:46.957273  307829 config.go:182] Loaded profile config "default-k8s-diff-port-986220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:16:46.957277  307829 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-986220"
	I1018 09:16:46.957273  307829 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-986220"
	I1018 09:16:46.957302  307829 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-986220"
	I1018 09:16:46.957324  307829 host.go:66] Checking if "default-k8s-diff-port-986220" exists ...
	I1018 09:16:46.957748  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:46.957965  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:46.959069  307829 out.go:179] * Verifying Kubernetes components...
	I1018 09:16:46.960365  307829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:16:46.985389  307829 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:16:46.986830  307829 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:16:46.986853  307829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:16:46.986931  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:46.987456  307829 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-986220"
	I1018 09:16:46.987508  307829 host.go:66] Checking if "default-k8s-diff-port-986220" exists ...
	I1018 09:16:46.988044  307829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:16:47.018839  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:47.025009  307829 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:16:47.025036  307829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:16:47.025097  307829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:16:47.048123  307829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:16:47.060152  307829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:16:47.123586  307829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:16:47.141939  307829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:16:47.165247  307829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:16:47.266448  307829 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
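The "host record injected" line is the effect of the sed pipeline a few lines up, which splices a hosts block for host.minikube.internal into the CoreDNS Corefile. A quick confirmation, as a sketch:

	kubectl --context default-k8s-diff-port-986220 -n kube-system \
		get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 hosts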
	I1018 09:16:47.268332  307829 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-986220" to be "Ready" ...
	I1018 09:16:47.477534  307829 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1018 09:16:45.293985  302609 pod_ready.go:104] pod "coredns-5dd5756b68-gwttp" is not "Ready", error: <nil>
	I1018 09:16:46.294258  302609 pod_ready.go:94] pod "coredns-5dd5756b68-gwttp" is "Ready"
	I1018 09:16:46.294287  302609 pod_ready.go:86] duration metric: took 31.006532603s for pod "coredns-5dd5756b68-gwttp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.297373  302609 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.302066  302609 pod_ready.go:94] pod "etcd-old-k8s-version-951975" is "Ready"
	I1018 09:16:46.302090  302609 pod_ready.go:86] duration metric: took 4.692329ms for pod "etcd-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.305138  302609 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.309671  302609 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-951975" is "Ready"
	I1018 09:16:46.309694  302609 pod_ready.go:86] duration metric: took 4.527103ms for pod "kube-apiserver-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.312739  302609 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.492306  302609 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-951975" is "Ready"
	I1018 09:16:46.492330  302609 pod_ready.go:86] duration metric: took 179.571371ms for pod "kube-controller-manager-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:46.692512  302609 pod_ready.go:83] waiting for pod "kube-proxy-rrzqp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:47.093588  302609 pod_ready.go:94] pod "kube-proxy-rrzqp" is "Ready"
	I1018 09:16:47.093616  302609 pod_ready.go:86] duration metric: took 401.079405ms for pod "kube-proxy-rrzqp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:47.294894  302609 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:47.692538  302609 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-951975" is "Ready"
	I1018 09:16:47.692575  302609 pod_ready.go:86] duration metric: took 397.645548ms for pod "kube-scheduler-old-k8s-version-951975" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:47.692591  302609 pod_ready.go:40] duration metric: took 32.409896584s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:16:47.741097  302609 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1018 09:16:47.743375  302609 out.go:203] 
	W1018 09:16:47.744703  302609 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 09:16:47.745945  302609 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 09:16:47.747222  302609 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-951975" cluster and "default" namespace by default
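The skew warning is expected here: kubectl officially supports one minor version of skew against the server, and 1.34.1 against a 1.28.0 cluster is six. The hint in the output runs a version-matched kubectl that minikube fetches per profile:

	minikube -p old-k8s-version-951975 kubectl -- get pods -A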
	I1018 09:16:47.478905  307829 addons.go:514] duration metric: took 521.738605ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:16:47.773018  307829 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-986220" context rescaled to 1 replicas
	W1018 09:16:49.271321  307829 node_ready.go:57] node "default-k8s-diff-port-986220" has "Ready":"False" status (will retry)
	W1018 09:16:48.064660  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:50.564043  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:51.272001  307829 node_ready.go:57] node "default-k8s-diff-port-986220" has "Ready":"False" status (will retry)
	W1018 09:16:53.272439  307829 node_ready.go:57] node "default-k8s-diff-port-986220" has "Ready":"False" status (will retry)
	W1018 09:16:52.566364  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:55.064035  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:55.772336  307829 node_ready.go:57] node "default-k8s-diff-port-986220" has "Ready":"False" status (will retry)
	I1018 09:16:58.271807  307829 node_ready.go:49] node "default-k8s-diff-port-986220" is "Ready"
	I1018 09:16:58.271837  307829 node_ready.go:38] duration metric: took 11.003458928s for node "default-k8s-diff-port-986220" to be "Ready" ...
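The 11s node wait corresponds to the node's Ready condition turning True; kubectl can express the same wait directly (a sketch):

	kubectl --context default-k8s-diff-port-986220 \
		wait --for=condition=Ready node/default-k8s-diff-port-986220 --timeout=6m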
	I1018 09:16:58.271850  307829 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:16:58.271895  307829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:16:58.285057  307829 api_server.go:72] duration metric: took 11.327963121s to wait for apiserver process to appear ...
	I1018 09:16:58.285082  307829 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:16:58.285099  307829 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1018 09:16:58.289252  307829 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1018 09:16:58.290394  307829 api_server.go:141] control plane version: v1.34.1
	I1018 09:16:58.290423  307829 api_server.go:131] duration metric: took 5.333954ms to wait for apiserver health ...
	I1018 09:16:58.290434  307829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:16:58.293660  307829 system_pods.go:59] 8 kube-system pods found
	I1018 09:16:58.293697  307829 system_pods.go:61] "coredns-66bc5c9577-bpcsk" [d89ef1c8-1a4b-41b5-9ecf-66daaae426ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:58.293708  307829 system_pods.go:61] "etcd-default-k8s-diff-port-986220" [67037dca-39e0-4261-91f2-a5d11ca68620] Running
	I1018 09:16:58.293718  307829 system_pods.go:61] "kindnet-cj6bv" [b21a6117-74a1-4a94-9dc4-3ba0856e6712] Running
	I1018 09:16:58.293724  307829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-986220" [26542843-30b7-4103-86cd-3f8870606b3f] Running
	I1018 09:16:58.293729  307829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-986220" [a8b55bcd-a0ae-4ef8-9193-06104a304545] Running
	I1018 09:16:58.293735  307829 system_pods.go:61] "kube-proxy-vvtpl" [3be57d5a-db16-4280-936c-af1a1e022017] Running
	I1018 09:16:58.293741  307829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-986220" [baa56190-deca-42c8-a305-04e13a5a0868] Running
	I1018 09:16:58.293762  307829 system_pods.go:61] "storage-provisioner" [6b5391e1-9c35-460f-b52d-8d434084db0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:16:58.293774  307829 system_pods.go:74] duration metric: took 3.332829ms to wait for pod list to return data ...
	I1018 09:16:58.293789  307829 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:16:58.296393  307829 default_sa.go:45] found service account: "default"
	I1018 09:16:58.296417  307829 default_sa.go:55] duration metric: took 2.620669ms for default service account to be created ...
	I1018 09:16:58.296426  307829 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:16:58.299232  307829 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:58.299265  307829 system_pods.go:89] "coredns-66bc5c9577-bpcsk" [d89ef1c8-1a4b-41b5-9ecf-66daaae426ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:58.299273  307829 system_pods.go:89] "etcd-default-k8s-diff-port-986220" [67037dca-39e0-4261-91f2-a5d11ca68620] Running
	I1018 09:16:58.299281  307829 system_pods.go:89] "kindnet-cj6bv" [b21a6117-74a1-4a94-9dc4-3ba0856e6712] Running
	I1018 09:16:58.299287  307829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-986220" [26542843-30b7-4103-86cd-3f8870606b3f] Running
	I1018 09:16:58.299294  307829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-986220" [a8b55bcd-a0ae-4ef8-9193-06104a304545] Running
	I1018 09:16:58.299300  307829 system_pods.go:89] "kube-proxy-vvtpl" [3be57d5a-db16-4280-936c-af1a1e022017] Running
	I1018 09:16:58.299306  307829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-986220" [baa56190-deca-42c8-a305-04e13a5a0868] Running
	I1018 09:16:58.299317  307829 system_pods.go:89] "storage-provisioner" [6b5391e1-9c35-460f-b52d-8d434084db0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:16:58.299379  307829 retry.go:31] will retry after 222.938941ms: missing components: kube-dns
	I1018 09:16:58.526793  307829 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:58.526823  307829 system_pods.go:89] "coredns-66bc5c9577-bpcsk" [d89ef1c8-1a4b-41b5-9ecf-66daaae426ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:58.526829  307829 system_pods.go:89] "etcd-default-k8s-diff-port-986220" [67037dca-39e0-4261-91f2-a5d11ca68620] Running
	I1018 09:16:58.526835  307829 system_pods.go:89] "kindnet-cj6bv" [b21a6117-74a1-4a94-9dc4-3ba0856e6712] Running
	I1018 09:16:58.526838  307829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-986220" [26542843-30b7-4103-86cd-3f8870606b3f] Running
	I1018 09:16:58.526842  307829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-986220" [a8b55bcd-a0ae-4ef8-9193-06104a304545] Running
	I1018 09:16:58.526845  307829 system_pods.go:89] "kube-proxy-vvtpl" [3be57d5a-db16-4280-936c-af1a1e022017] Running
	I1018 09:16:58.526848  307829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-986220" [baa56190-deca-42c8-a305-04e13a5a0868] Running
	I1018 09:16:58.526854  307829 system_pods.go:89] "storage-provisioner" [6b5391e1-9c35-460f-b52d-8d434084db0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:16:58.526867  307829 retry.go:31] will retry after 336.886717ms: missing components: kube-dns
	I1018 09:16:58.867886  307829 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:58.867916  307829 system_pods.go:89] "coredns-66bc5c9577-bpcsk" [d89ef1c8-1a4b-41b5-9ecf-66daaae426ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:16:58.867921  307829 system_pods.go:89] "etcd-default-k8s-diff-port-986220" [67037dca-39e0-4261-91f2-a5d11ca68620] Running
	I1018 09:16:58.867929  307829 system_pods.go:89] "kindnet-cj6bv" [b21a6117-74a1-4a94-9dc4-3ba0856e6712] Running
	I1018 09:16:58.867935  307829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-986220" [26542843-30b7-4103-86cd-3f8870606b3f] Running
	I1018 09:16:58.867942  307829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-986220" [a8b55bcd-a0ae-4ef8-9193-06104a304545] Running
	I1018 09:16:58.867950  307829 system_pods.go:89] "kube-proxy-vvtpl" [3be57d5a-db16-4280-936c-af1a1e022017] Running
	I1018 09:16:58.867955  307829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-986220" [baa56190-deca-42c8-a305-04e13a5a0868] Running
	I1018 09:16:58.867966  307829 system_pods.go:89] "storage-provisioner" [6b5391e1-9c35-460f-b52d-8d434084db0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:16:58.867984  307829 retry.go:31] will retry after 410.478388ms: missing components: kube-dns
	I1018 09:16:59.283325  307829 system_pods.go:86] 8 kube-system pods found
	I1018 09:16:59.283375  307829 system_pods.go:89] "coredns-66bc5c9577-bpcsk" [d89ef1c8-1a4b-41b5-9ecf-66daaae426ba] Running
	I1018 09:16:59.283385  307829 system_pods.go:89] "etcd-default-k8s-diff-port-986220" [67037dca-39e0-4261-91f2-a5d11ca68620] Running
	I1018 09:16:59.283393  307829 system_pods.go:89] "kindnet-cj6bv" [b21a6117-74a1-4a94-9dc4-3ba0856e6712] Running
	I1018 09:16:59.283399  307829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-986220" [26542843-30b7-4103-86cd-3f8870606b3f] Running
	I1018 09:16:59.283404  307829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-986220" [a8b55bcd-a0ae-4ef8-9193-06104a304545] Running
	I1018 09:16:59.283409  307829 system_pods.go:89] "kube-proxy-vvtpl" [3be57d5a-db16-4280-936c-af1a1e022017] Running
	I1018 09:16:59.283417  307829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-986220" [baa56190-deca-42c8-a305-04e13a5a0868] Running
	I1018 09:16:59.283421  307829 system_pods.go:89] "storage-provisioner" [6b5391e1-9c35-460f-b52d-8d434084db0e] Running
	I1018 09:16:59.283432  307829 system_pods.go:126] duration metric: took 986.998816ms to wait for k8s-apps to be running ...
	I1018 09:16:59.283448  307829 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:16:59.283507  307829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:16:59.299136  307829 system_svc.go:56] duration metric: took 15.661893ms WaitForService to wait for kubelet
	I1018 09:16:59.299172  307829 kubeadm.go:586] duration metric: took 12.342081392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:16:59.299193  307829 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:16:59.302607  307829 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:16:59.302636  307829 node_conditions.go:123] node cpu capacity is 8
	I1018 09:16:59.302648  307829 node_conditions.go:105] duration metric: took 3.45011ms to run NodePressure ...
	I1018 09:16:59.302660  307829 start.go:241] waiting for startup goroutines ...
	I1018 09:16:59.302666  307829 start.go:246] waiting for cluster config update ...
	I1018 09:16:59.302677  307829 start.go:255] writing updated cluster config ...
	I1018 09:16:59.303823  307829 ssh_runner.go:195] Run: rm -f paused
	I1018 09:16:59.308246  307829 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:16:59.313062  307829 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bpcsk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.318310  307829 pod_ready.go:94] pod "coredns-66bc5c9577-bpcsk" is "Ready"
	I1018 09:16:59.318333  307829 pod_ready.go:86] duration metric: took 5.242596ms for pod "coredns-66bc5c9577-bpcsk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.320834  307829 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.325424  307829 pod_ready.go:94] pod "etcd-default-k8s-diff-port-986220" is "Ready"
	I1018 09:16:59.325454  307829 pod_ready.go:86] duration metric: took 4.599009ms for pod "etcd-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.327639  307829 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.332166  307829 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-986220" is "Ready"
	I1018 09:16:59.332189  307829 pod_ready.go:86] duration metric: took 4.523446ms for pod "kube-apiserver-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.334399  307829 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.714156  307829 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-986220" is "Ready"
	I1018 09:16:59.714185  307829 pod_ready.go:86] duration metric: took 379.76296ms for pod "kube-controller-manager-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:16:59.913655  307829 pod_ready.go:83] waiting for pod "kube-proxy-vvtpl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:00.313187  307829 pod_ready.go:94] pod "kube-proxy-vvtpl" is "Ready"
	I1018 09:17:00.313213  307829 pod_ready.go:86] duration metric: took 399.535476ms for pod "kube-proxy-vvtpl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:00.514099  307829 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:00.914005  307829 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-986220" is "Ready"
	I1018 09:17:00.914034  307829 pod_ready.go:86] duration metric: took 399.907153ms for pod "kube-scheduler-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:00.914046  307829 pod_ready.go:40] duration metric: took 1.605764724s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:17:00.963127  307829 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:17:00.965147  307829 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-986220" cluster and "default" namespace by default
	W1018 09:16:57.064222  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:16:59.064638  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
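The wait loop above (pod_ready.go) gates startup on the Ready condition of pods carrying the listed control-plane labels. Outside the harness, roughly the same gate can be reproduced with kubectl wait; a minimal Go sketch, assuming kubectl is on PATH and reusing the context name logged above (note kubectl wait errors on selectors that match no pods, whereas the harness also accepts "gone"):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same label selectors the harness waits on (see pod_ready.go above).
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy",
			"component=kube-scheduler",
		}
		for _, sel := range selectors {
			// kubectl wait blocks until the Ready condition is true or the timeout fires.
			cmd := exec.Command("kubectl", "--context", "default-k8s-diff-port-986220",
				"-n", "kube-system", "wait", "pod", "-l", sel,
				"--for=condition=Ready", "--timeout=4m")
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("selector %q not ready: %v\n%s", sel, err, out)
			}
		}
	}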
	
	
	==> CRI-O <==
	Oct 18 09:16:34 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:34.426913522Z" level=info msg="Started container" PID=1706 containerID=c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d/dashboard-metrics-scraper id=65c7597e-8ea2-4732-94ea-6e1562da1940 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e165057c7cc8b14fb63454b0740dfeb6ed8f3117a94c168883cfb9822356007c
	Oct 18 09:16:35 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:35.381463137Z" level=info msg="Removing container: 1bfa96685a18fe9ed455af32503237123856585da9cd413b1048c58c828adbd5" id=c0d7fc08-261f-445b-a9e1-c3059f068ec6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:16:35 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:35.394135672Z" level=info msg="Removed container 1bfa96685a18fe9ed455af32503237123856585da9cd413b1048c58c828adbd5: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d/dashboard-metrics-scraper" id=c0d7fc08-261f-445b-a9e1-c3059f068ec6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.407665795Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ad53ecd6-a5b5-472d-9d8f-89a5915d8acd name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.408714106Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3c95a7e7-6160-43cd-b4c7-8aba2a8a9471 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.410291362Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=adccd699-6b5d-41e2-9b63-100b00fca986 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.414007422Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.420551436Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.420723163Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/eeff2469445ced83e17307204e3a6fc44e67a8d3667b3e127180227b3e96d406/merged/etc/passwd: no such file or directory"
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.420751506Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/eeff2469445ced83e17307204e3a6fc44e67a8d3667b3e127180227b3e96d406/merged/etc/group: no such file or directory"
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.421035499Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.449780046Z" level=info msg="Created container dd5dcb7d66045a0152ff8c078146a86a98fcc49d2df4f7b6dd15a94d89058078: kube-system/storage-provisioner/storage-provisioner" id=adccd699-6b5d-41e2-9b63-100b00fca986 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.45047372Z" level=info msg="Starting container: dd5dcb7d66045a0152ff8c078146a86a98fcc49d2df4f7b6dd15a94d89058078" id=a54e2822-1f4b-456b-9aaa-2c113750295d name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:16:45 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:45.452405741Z" level=info msg="Started container" PID=1725 containerID=dd5dcb7d66045a0152ff8c078146a86a98fcc49d2df4f7b6dd15a94d89058078 description=kube-system/storage-provisioner/storage-provisioner id=a54e2822-1f4b-456b-9aaa-2c113750295d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6467b0224b47e15db874d81ac86ce634390aec4c2f6b72fb5fb296bfcf5aadb6
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.278240973Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b5d8df68-6115-4e4a-aed2-ae00a4b47579 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.279277607Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2f834347-105d-48d7-9ac3-d136d1d5bfc6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.280294822Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d/dashboard-metrics-scraper" id=276118ca-f3d5-44a0-aa8d-5c1b84ed523a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.280591762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.28600226Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.286492634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.323607414Z" level=info msg="Created container f59a4536a6be07648a7d886609196032c6dd09725a1a2250b74c79dd1ca7a6ee: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d/dashboard-metrics-scraper" id=276118ca-f3d5-44a0-aa8d-5c1b84ed523a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.324302697Z" level=info msg="Starting container: f59a4536a6be07648a7d886609196032c6dd09725a1a2250b74c79dd1ca7a6ee" id=ed99e408-266e-47f5-b569-ab81c2443184 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.32652895Z" level=info msg="Started container" PID=1762 containerID=f59a4536a6be07648a7d886609196032c6dd09725a1a2250b74c79dd1ca7a6ee description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d/dashboard-metrics-scraper id=ed99e408-266e-47f5-b569-ab81c2443184 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e165057c7cc8b14fb63454b0740dfeb6ed8f3117a94c168883cfb9822356007c
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.422888771Z" level=info msg="Removing container: c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe" id=6fcce697-eccd-4c8a-b809-dabbb7adc3bd name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:16:49 old-k8s-version-951975 crio[561]: time="2025-10-18T09:16:49.432927701Z" level=info msg="Removed container c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d/dashboard-metrics-scraper" id=6fcce697-eccd-4c8a-b809-dabbb7adc3bd name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	f59a4536a6be0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   e165057c7cc8b       dashboard-metrics-scraper-5f989dc9cf-zdj6d       kubernetes-dashboard
	dd5dcb7d66045       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   6467b0224b47e       storage-provisioner                              kube-system
	1731b366ef3fd       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   32 seconds ago      Running             kubernetes-dashboard        0                   520ec419f0c84       kubernetes-dashboard-8694d4445c-qms7p            kubernetes-dashboard
	850ecec987439       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           50 seconds ago      Running             coredns                     0                   6b5019e0a7a2b       coredns-5dd5756b68-gwttp                         kube-system
	9a9fa34e51033       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   24ee2b343ec2d       busybox                                          default
	e13798224f38d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   6467b0224b47e       storage-provisioner                              kube-system
	37cdebb50e345       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           50 seconds ago      Running             kube-proxy                  0                   0527ef968c498       kube-proxy-rrzqp                                 kube-system
	c707d266c99b4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   926cfe19e44bf       kindnet-k2756                                    kube-system
	16a4a0198ff18       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     0                   81adcb4e1b7bc       kube-controller-manager-old-k8s-version-951975   kube-system
	b2de01dc9072c       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              0                   f3ee9d25a0bc8       kube-scheduler-old-k8s-version-951975            kube-system
	f2e7310b9fd30       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              0                   21f7b008a8428       kube-apiserver-old-k8s-version-951975            kube-system
	1d1d7b9a46038       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        0                   e7764baa63e9a       etcd-old-k8s-version-951975                      kube-system
	
	
	==> coredns [850ecec987439ee84e6448cada291df9cce48b7f0c730a4f0638f43a13af3bc0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52869 - 7986 "HINFO IN 6955480671343103566.8601139003691296637. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.865316514s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-951975
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-951975
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=old-k8s-version-951975
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_15_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:15:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-951975
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:16:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:16:44 +0000   Sat, 18 Oct 2025 09:15:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:16:44 +0000   Sat, 18 Oct 2025 09:15:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:16:44 +0000   Sat, 18 Oct 2025 09:15:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:16:44 +0000   Sat, 18 Oct 2025 09:15:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-951975
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                bca7ca56-ad4d-4955-80a5-36cf90a3bf8e
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-gwttp                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-old-k8s-version-951975                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-k2756                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-951975             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-951975    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-rrzqp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-951975             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-zdj6d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-qms7p             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-951975 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node old-k8s-version-951975 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s                 node-controller  Node old-k8s-version-951975 event: Registered Node old-k8s-version-951975 in Controller
	  Normal  NodeReady                93s                  kubelet          Node old-k8s-version-951975 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node old-k8s-version-951975 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node old-k8s-version-951975 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                  node-controller  Node old-k8s-version-951975 event: Registered Node old-k8s-version-951975 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[ +43.809014] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 6f 4b 2b 7f 46 08 06
	[  +0.000367] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	
	
	==> etcd [1d1d7b9a4603835edfbcabef69e64877d18a1499301245bf79771003e000b780] <==
	{"level":"info","ts":"2025-10-18T09:16:10.853815Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:16:10.854271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-10-18T09:16:10.854394Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-10-18T09:16:10.854541Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:16:10.854574Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:16:10.853979Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:16:10.856237Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T09:16:10.856498Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T09:16:10.856531Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T09:16:10.856632Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-18T09:16:10.856646Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-18T09:16:12.543825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T09:16:12.5439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T09:16:12.543919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-18T09:16:12.543941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T09:16:12.543947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-18T09:16:12.543955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-10-18T09:16:12.543965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-18T09:16:12.545465Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-951975 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T09:16:12.545467Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:16:12.545502Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:16:12.545964Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T09:16:12.54602Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T09:16:12.547779Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-18T09:16:12.547784Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:17:05 up 59 min,  0 user,  load average: 3.47, 3.41, 2.36
	Linux old-k8s-version-951975 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c707d266c99b4e19a4a07275b8e8367a1594b6cf94012a72f161afb9027cd1cf] <==
	I1018 09:16:14.852506       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:16:14.852763       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 09:16:14.852964       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:16:14.852988       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:16:14.853002       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:16:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:16:15.153693       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:16:15.154240       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:16:15.154264       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:16:15.154480       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:16:15.555079       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:16:15.555105       1 metrics.go:72] Registering metrics
	I1018 09:16:15.555175       1 controller.go:711] "Syncing nftables rules"
	I1018 09:16:25.153911       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 09:16:25.153966       1 main.go:301] handling current node
	I1018 09:16:35.153454       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 09:16:35.153506       1 main.go:301] handling current node
	I1018 09:16:45.153453       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 09:16:45.153484       1 main.go:301] handling current node
	I1018 09:16:55.158383       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 09:16:55.158419       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f2e7310b9fd30510062cf4fc3f3196d0199a8bb693ccf374fe7926da05bc717a] <==
	I1018 09:16:13.534877       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 09:16:13.606105       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:16:13.630546       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 09:16:13.630569       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 09:16:13.630567       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 09:16:13.630895       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:16:13.631498       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 09:16:13.636761       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 09:16:13.652396       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 09:16:13.669873       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 09:16:13.669983       1 aggregator.go:166] initial CRD sync complete...
	I1018 09:16:13.670015       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 09:16:13.670044       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:16:13.670057       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:16:14.521542       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 09:16:14.533856       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:16:14.563691       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 09:16:14.611939       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:16:14.627857       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:16:14.643013       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 09:16:14.708585       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.33.157"}
	I1018 09:16:14.725387       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.39.212"}
	I1018 09:16:26.832066       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 09:16:27.078814       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 09:16:27.128533       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [16a4a0198ff18096f38de4bc58c31bf5f03bdf37076c3c4e4d32e4fb7d38b886] <==
	I1018 09:16:26.985108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.212µs"
	I1018 09:16:27.086201       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1018 09:16:27.086536       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1018 09:16:27.096418       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-qms7p"
	I1018 09:16:27.097114       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-zdj6d"
	I1018 09:16:27.102373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="16.690529ms"
	I1018 09:16:27.106011       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.148807ms"
	I1018 09:16:27.110295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.862564ms"
	I1018 09:16:27.110436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="91.363µs"
	I1018 09:16:27.116160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.089678ms"
	I1018 09:16:27.116374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="62.545µs"
	I1018 09:16:27.123809       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="89.048µs"
	I1018 09:16:27.143767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="127.979µs"
	I1018 09:16:27.153924       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:16:27.173907       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:16:27.173943       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 09:16:32.393636       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.734772ms"
	I1018 09:16:32.394733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="115.652µs"
	I1018 09:16:34.386868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.124µs"
	I1018 09:16:35.391866       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="191.196µs"
	I1018 09:16:36.400499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="179.455µs"
	I1018 09:16:46.002554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.426269ms"
	I1018 09:16:46.002699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.449µs"
	I1018 09:16:49.434091       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.085µs"
	I1018 09:16:57.419672       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.023µs"
	
	
	==> kube-proxy [37cdebb50e3452e3797b2554403f29b4e05357c580e8231a729dc63a87d0f932] <==
	I1018 09:16:14.719539       1 server_others.go:69] "Using iptables proxy"
	I1018 09:16:14.729544       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1018 09:16:14.752777       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:16:14.755817       1 server_others.go:152] "Using iptables Proxier"
	I1018 09:16:14.755867       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 09:16:14.755878       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 09:16:14.755920       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 09:16:14.757098       1 server.go:846] "Version info" version="v1.28.0"
	I1018 09:16:14.757128       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:16:14.758251       1 config.go:188] "Starting service config controller"
	I1018 09:16:14.758297       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 09:16:14.758479       1 config.go:315] "Starting node config controller"
	I1018 09:16:14.758551       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 09:16:14.758490       1 config.go:97] "Starting endpoint slice config controller"
	I1018 09:16:14.758609       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 09:16:14.859013       1 shared_informer.go:318] Caches are synced for service config
	I1018 09:16:14.859034       1 shared_informer.go:318] Caches are synced for node config
	I1018 09:16:14.860179       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b2de01dc9072ccffefae3182aec6a17d04655623980355d4f88424a0d4e01818] <==
	I1018 09:16:11.463746       1 serving.go:348] Generated self-signed cert in-memory
	W1018 09:16:13.594708       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:16:13.594754       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:16:13.594767       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:16:13.594777       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:16:13.615789       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1018 09:16:13.615815       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:16:13.617170       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:16:13.617207       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 09:16:13.618049       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1018 09:16:13.618083       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1018 09:16:13.717632       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 09:16:27 old-k8s-version-951975 kubelet[718]: I1018 09:16:27.106911     718 topology_manager.go:215] "Topology Admit Handler" podUID="55b6301a-677b-42eb-90f9-ff3b66ddb759" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-qms7p"
	Oct 18 09:16:27 old-k8s-version-951975 kubelet[718]: I1018 09:16:27.179146     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9xhh\" (UniqueName: \"kubernetes.io/projected/ee98ebcc-4473-4573-a4a4-e4f65da59d9b-kube-api-access-b9xhh\") pod \"dashboard-metrics-scraper-5f989dc9cf-zdj6d\" (UID: \"ee98ebcc-4473-4573-a4a4-e4f65da59d9b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d"
	Oct 18 09:16:27 old-k8s-version-951975 kubelet[718]: I1018 09:16:27.179207     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ee98ebcc-4473-4573-a4a4-e4f65da59d9b-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-zdj6d\" (UID: \"ee98ebcc-4473-4573-a4a4-e4f65da59d9b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d"
	Oct 18 09:16:27 old-k8s-version-951975 kubelet[718]: I1018 09:16:27.179244     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqz5\" (UniqueName: \"kubernetes.io/projected/55b6301a-677b-42eb-90f9-ff3b66ddb759-kube-api-access-lpqz5\") pod \"kubernetes-dashboard-8694d4445c-qms7p\" (UID: \"55b6301a-677b-42eb-90f9-ff3b66ddb759\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qms7p"
	Oct 18 09:16:27 old-k8s-version-951975 kubelet[718]: I1018 09:16:27.179362     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/55b6301a-677b-42eb-90f9-ff3b66ddb759-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-qms7p\" (UID: \"55b6301a-677b-42eb-90f9-ff3b66ddb759\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qms7p"
	Oct 18 09:16:32 old-k8s-version-951975 kubelet[718]: I1018 09:16:32.382438     718 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qms7p" podStartSLOduration=0.820473436 podCreationTimestamp="2025-10-18 09:16:27 +0000 UTC" firstStartedPulling="2025-10-18 09:16:27.435627203 +0000 UTC m=+17.268674601" lastFinishedPulling="2025-10-18 09:16:31.99752505 +0000 UTC m=+21.830572495" observedRunningTime="2025-10-18 09:16:32.38212967 +0000 UTC m=+22.215177083" watchObservedRunningTime="2025-10-18 09:16:32.38237133 +0000 UTC m=+22.215418744"
	Oct 18 09:16:34 old-k8s-version-951975 kubelet[718]: I1018 09:16:34.375183     718 scope.go:117] "RemoveContainer" containerID="1bfa96685a18fe9ed455af32503237123856585da9cd413b1048c58c828adbd5"
	Oct 18 09:16:35 old-k8s-version-951975 kubelet[718]: I1018 09:16:35.380075     718 scope.go:117] "RemoveContainer" containerID="1bfa96685a18fe9ed455af32503237123856585da9cd413b1048c58c828adbd5"
	Oct 18 09:16:35 old-k8s-version-951975 kubelet[718]: I1018 09:16:35.380259     718 scope.go:117] "RemoveContainer" containerID="c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe"
	Oct 18 09:16:35 old-k8s-version-951975 kubelet[718]: E1018 09:16:35.380714     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zdj6d_kubernetes-dashboard(ee98ebcc-4473-4573-a4a4-e4f65da59d9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d" podUID="ee98ebcc-4473-4573-a4a4-e4f65da59d9b"
	Oct 18 09:16:36 old-k8s-version-951975 kubelet[718]: I1018 09:16:36.384887     718 scope.go:117] "RemoveContainer" containerID="c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe"
	Oct 18 09:16:36 old-k8s-version-951975 kubelet[718]: E1018 09:16:36.385218     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zdj6d_kubernetes-dashboard(ee98ebcc-4473-4573-a4a4-e4f65da59d9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d" podUID="ee98ebcc-4473-4573-a4a4-e4f65da59d9b"
	Oct 18 09:16:37 old-k8s-version-951975 kubelet[718]: I1018 09:16:37.408684     718 scope.go:117] "RemoveContainer" containerID="c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe"
	Oct 18 09:16:37 old-k8s-version-951975 kubelet[718]: E1018 09:16:37.409031     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zdj6d_kubernetes-dashboard(ee98ebcc-4473-4573-a4a4-e4f65da59d9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d" podUID="ee98ebcc-4473-4573-a4a4-e4f65da59d9b"
	Oct 18 09:16:45 old-k8s-version-951975 kubelet[718]: I1018 09:16:45.407115     718 scope.go:117] "RemoveContainer" containerID="e13798224f38d89856ac0d589f0dbef9694affee05d404ed03bc1423d5b36d66"
	Oct 18 09:16:49 old-k8s-version-951975 kubelet[718]: I1018 09:16:49.277571     718 scope.go:117] "RemoveContainer" containerID="c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe"
	Oct 18 09:16:49 old-k8s-version-951975 kubelet[718]: I1018 09:16:49.421644     718 scope.go:117] "RemoveContainer" containerID="c86035a98d2e811d86f1f11558369ba8ac7d1618e695578b959012bc810d6cfe"
	Oct 18 09:16:49 old-k8s-version-951975 kubelet[718]: I1018 09:16:49.421892     718 scope.go:117] "RemoveContainer" containerID="f59a4536a6be07648a7d886609196032c6dd09725a1a2250b74c79dd1ca7a6ee"
	Oct 18 09:16:49 old-k8s-version-951975 kubelet[718]: E1018 09:16:49.422263     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zdj6d_kubernetes-dashboard(ee98ebcc-4473-4573-a4a4-e4f65da59d9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d" podUID="ee98ebcc-4473-4573-a4a4-e4f65da59d9b"
	Oct 18 09:16:57 old-k8s-version-951975 kubelet[718]: I1018 09:16:57.408932     718 scope.go:117] "RemoveContainer" containerID="f59a4536a6be07648a7d886609196032c6dd09725a1a2250b74c79dd1ca7a6ee"
	Oct 18 09:16:57 old-k8s-version-951975 kubelet[718]: E1018 09:16:57.409272     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zdj6d_kubernetes-dashboard(ee98ebcc-4473-4573-a4a4-e4f65da59d9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zdj6d" podUID="ee98ebcc-4473-4573-a4a4-e4f65da59d9b"
	Oct 18 09:16:59 old-k8s-version-951975 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:16:59 old-k8s-version-951975 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:16:59 old-k8s-version-951975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:16:59 old-k8s-version-951975 systemd[1]: kubelet.service: Consumed 1.603s CPU time.
	
	
	==> kubernetes-dashboard [1731b366ef3fded158839dbcd6cc44068387d425b2e39024818c85643cff484e] <==
	2025/10/18 09:16:32 Starting overwatch
	2025/10/18 09:16:32 Using namespace: kubernetes-dashboard
	2025/10/18 09:16:32 Using in-cluster config to connect to apiserver
	2025/10/18 09:16:32 Using secret token for csrf signing
	2025/10/18 09:16:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:16:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:16:32 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 09:16:32 Generating JWE encryption key
	2025/10/18 09:16:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:16:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:16:32 Initializing JWE encryption key from synchronized object
	2025/10/18 09:16:32 Creating in-cluster Sidecar client
	2025/10/18 09:16:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:16:32 Serving insecurely on HTTP port: 9090
	2025/10/18 09:17:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [dd5dcb7d66045a0152ff8c078146a86a98fcc49d2df4f7b6dd15a94d89058078] <==
	I1018 09:16:45.465666       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:16:45.475117       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:16:45.475168       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 09:17:02.876197       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:17:02.876383       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27b4ae7e-91ad-46cf-b758-f945092ba79c", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-951975_903bd012-b300-4dcd-8ed2-63bbd3769fea became leader
	I1018 09:17:02.876453       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-951975_903bd012-b300-4dcd-8ed2-63bbd3769fea!
	I1018 09:17:02.977145       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-951975_903bd012-b300-4dcd-8ed2-63bbd3769fea!
	
	
	==> storage-provisioner [e13798224f38d89856ac0d589f0dbef9694affee05d404ed03bc1423d5b36d66] <==
	I1018 09:16:14.684717       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:16:44.690170       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
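The exited provisioner above died on its first API call: a GET to https://10.96.0.1:443/version with a 32s timeout that never reached the service VIP. A minimal Go probe of the same endpoint; the InsecureSkipVerify transport is an assumption for a bare connectivity check, not what the provisioner's client actually does (in-cluster it presents the service-account CA and token):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Same call the provisioner made: GET /version on the kubernetes
		// service VIP, with the 32s timeout seen in the log line above.
		client := &http.Client{
			Timeout: 32 * time.Second,
			Transport: &http.Transport{
				// Assumption: skip cert verification for a raw reachability probe.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			fmt.Println("probe failed (matches the provisioner error):", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status %s: %s\n", resp.Status, body)
	}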
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-951975 -n old-k8s-version-951975
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-951975 -n old-k8s-version-951975: exit status 2 (343.116406ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-951975 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-986220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-986220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (251.297906ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:17:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-986220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
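The exit status 11 here is not the addon itself: per the stderr above, the paused-state check shells out to sudo runc list -f json, and runc exits nonzero when its state directory (/run/runc by default for rootful runc) does not exist yet, rather than reporting zero containers. A minimal Go sketch of that check, under the same assumption about the state directory; the empty-list fallback is illustrative, not what minikube does:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the addon path runs on the node: list runc-managed
		// containers as JSON. With the runtime idle, /run/runc may be absent,
		// which runc reports as an error instead of an empty list.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "no such file or directory") {
				fmt.Println("state dir missing; treating as zero containers (assumption)")
				return
			}
			fmt.Println("runc list failed:", err)
			return
		}
		var containers []map[string]any
		_ = json.Unmarshal(out, &containers)
		fmt.Printf("%d runc containers\n", len(containers))
	}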
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-986220 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-986220 describe deploy/metrics-server -n kube-system: exit status 1 (67.585998ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-986220 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
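The assertion at start_stop_delete_test.go:219 greps the deployment description for the rewritten image, but the deployment was never created, so there is nothing to match. When the deployment does exist, the image field can be read directly; a minimal sketch, assuming the same context and namespace as above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Read the container image the addon actually deployed; the test
		// expects it to carry the fake.domain registry prefix.
		out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-986220",
			"-n", "kube-system", "get", "deploy", "metrics-server",
			"-o", "jsonpath={.spec.template.spec.containers[0].image}").CombinedOutput()
		if err != nil {
			fmt.Printf("deployment missing, as in the failure above: %v\n%s", err, out)
			return
		}
		fmt.Println("metrics-server image:", string(out))
	}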
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-986220
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-986220:

-- stdout --
	[
	    {
	        "Id": "48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575",
	        "Created": "2025-10-18T09:16:19.86673265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308660,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:16:19.913296124Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575/hostname",
	        "HostsPath": "/var/lib/docker/containers/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575/hosts",
	        "LogPath": "/var/lib/docker/containers/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575-json.log",
	        "Name": "/default-k8s-diff-port-986220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-986220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-986220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575",
	                "LowerDir": "/var/lib/docker/overlay2/ebca512855d8718cc57a74f0c5a7cb78a8d4717430e6e9b0fbcfa814a3464016-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ebca512855d8718cc57a74f0c5a7cb78a8d4717430e6e9b0fbcfa814a3464016/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ebca512855d8718cc57a74f0c5a7cb78a8d4717430e6e9b0fbcfa814a3464016/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ebca512855d8718cc57a74f0c5a7cb78a8d4717430e6e9b0fbcfa814a3464016/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-986220",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-986220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-986220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-986220",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-986220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f55b50886228c35ba3fafb6d17dec29907078977eb479a48fd6248360a5a9146",
	            "SandboxKey": "/var/run/docker/netns/f55b50886228",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-986220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:7f:27:c2:65:ac",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef55982bb9e9da39f5725d618404d1c9094984213effce96590128a5ebc25231",
	                    "EndpointID": "88f8edb856e3600419fd44cebe6b0bbd33a8616c385a1d83b516fc45d4d13411",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-986220",
	                        "48881c0b9d83"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
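
The inspect output above shows the node container running with the API server port 8444/tcp published on 127.0.0.1:33111. When cross-checking a report like this programmatically, a small Go sketch can pull out just that port mapping (field names follow the Docker Engine API; error handling kept minimal; not part of the test suite):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Only the fields needed here; the full inspect document is far larger.
    type container struct {
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIp   string
                HostPort string
            }
        }
    }

    func main() {
        // `docker inspect` always returns a JSON array, even for a single name.
        out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-986220").Output()
        if err != nil {
            panic(err)
        }
        var cs []container
        if err := json.Unmarshal(out, &cs); err != nil {
            panic(err)
        }
        fmt.Println(cs[0].NetworkSettings.Ports["8444/tcp"]) // e.g. [{127.0.0.1 33111}]
    }
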
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-986220 -n default-k8s-diff-port-986220
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-986220 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-986220 logs -n 25: (1.425636793s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                    │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                              │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cri-dockerd --version                                                                                                                                                                                       │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                         │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl cat containerd --no-pager                                                                                                                                                                         │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                  │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/containerd/config.toml                                                                                                                                                                             │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo containerd config dump                                                                                                                                                                                      │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl status crio --all --full --no-pager                                                                                                                                                               │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl cat crio --no-pager                                                                                                                                                                               │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                     │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo crio config                                                                                                                                                                                                 │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ delete  │ -p enable-default-cni-448954                                                                                                                                                                                                                  │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ delete  │ -p disable-driver-mounts-634520                                                                                                                                                                                                               │ disable-driver-mounts-634520 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-031066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p no-preload-031066 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-880603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-880603 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ image   │ old-k8s-version-951975 image list --format=json                                                                                                                                                                                               │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ pause   │ -p old-k8s-version-951975 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-986220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:17:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:17:08.980062  317552 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:17:08.980385  317552 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:08.980398  317552 out.go:374] Setting ErrFile to fd 2...
	I1018 09:17:08.980405  317552 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:08.980634  317552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:17:08.981169  317552 out.go:368] Setting JSON to false
	I1018 09:17:08.982466  317552 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3577,"bootTime":1760775452,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:17:08.982573  317552 start.go:141] virtualization: kvm guest
	I1018 09:17:08.984615  317552 out.go:179] * [newest-cni-444637] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:17:08.985825  317552 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:17:08.985869  317552 notify.go:220] Checking for updates...
	I1018 09:17:08.988082  317552 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:17:08.989427  317552 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:08.990931  317552 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:17:08.992201  317552 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:17:08.993504  317552 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:17:08.995300  317552 config.go:182] Loaded profile config "default-k8s-diff-port-986220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:08.995422  317552 config.go:182] Loaded profile config "embed-certs-880603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:08.995535  317552 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:08.995625  317552 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:17:09.021850  317552 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:17:09.021947  317552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:09.083608  317552 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-18 09:17:09.073381026 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:09.083724  317552 docker.go:318] overlay module found
	I1018 09:17:09.085705  317552 out.go:179] * Using the docker driver based on user configuration
	I1018 09:17:09.086992  317552 start.go:305] selected driver: docker
	I1018 09:17:09.087007  317552 start.go:925] validating driver "docker" against <nil>
	I1018 09:17:09.087019  317552 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:17:09.087682  317552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:09.151145  317552 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-18 09:17:09.1384969 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:09.151432  317552 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 09:17:09.151476  317552 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 09:17:09.151746  317552 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:17:09.154458  317552 out.go:179] * Using Docker driver with root privileges
	I1018 09:17:09.155888  317552 cni.go:84] Creating CNI manager for ""
	I1018 09:17:09.155982  317552 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:17:09.155997  317552 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:17:09.156073  317552 start.go:349] cluster config:
	{Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:09.157406  317552 out.go:179] * Starting "newest-cni-444637" primary control-plane node in "newest-cni-444637" cluster
	I1018 09:17:09.158699  317552 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:17:09.159992  317552 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:17:09.161521  317552 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:17:09.161576  317552 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:17:09.161593  317552 cache.go:58] Caching tarball of preloaded images
	I1018 09:17:09.161650  317552 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:17:09.161711  317552 preload.go:233] Found /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:17:09.161728  317552 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:17:09.161883  317552 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json ...
	I1018 09:17:09.161913  317552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json: {Name:mk65454167e7645600f7f87c3644877d6a8a1717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:09.185398  317552 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:17:09.185422  317552 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:17:09.185444  317552 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:17:09.185479  317552 start.go:360] acquireMachinesLock for newest-cni-444637: {Name:mkf6974ca6fc7b22cdf212b383f50d3f090ea59b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:17:09.185610  317552 start.go:364] duration metric: took 101.71µs to acquireMachinesLock for "newest-cni-444637"
	I1018 09:17:09.185655  317552 start.go:93] Provisioning new machine with config: &{Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:17:09.185737  317552 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Oct 18 09:16:58 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:16:58.405975035Z" level=info msg="Starting container: a09d7ccaedddf61fe12189e58cc9ed74adb4a3a0a17fc89263d5c1899674081e" id=be39608a-b51d-4fc0-b7f3-6ea5173f1a4f name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:16:58 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:16:58.408110585Z" level=info msg="Started container" PID=1862 containerID=a09d7ccaedddf61fe12189e58cc9ed74adb4a3a0a17fc89263d5c1899674081e description=kube-system/coredns-66bc5c9577-bpcsk/coredns id=be39608a-b51d-4fc0-b7f3-6ea5173f1a4f name=/runtime.v1.RuntimeService/StartContainer sandboxID=e4c9411a2b23fcd218d0cebbc93d666b69e5740b188c6907498397a829c36afa
	Oct 18 09:17:01 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:01.444810333Z" level=info msg="Running pod sandbox: default/busybox/POD" id=146f8709-7a19-4ab2-9c4a-3a5c300d4ba2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:17:01 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:01.44490995Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:01 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:01.450141132Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8af25ac2f664198a53666f0e828c4381d5e1bb80708fea99ff72ee6ce0f5f8e8 UID:335a5ad4-0ec1-49da-9c93-b12fad5660a4 NetNS:/var/run/netns/d7835cbf-b678-4f86-8393-8891b5c6171c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003c0478}] Aliases:map[]}"
	Oct 18 09:17:01 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:01.450171598Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:17:01 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:01.459757721Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8af25ac2f664198a53666f0e828c4381d5e1bb80708fea99ff72ee6ce0f5f8e8 UID:335a5ad4-0ec1-49da-9c93-b12fad5660a4 NetNS:/var/run/netns/d7835cbf-b678-4f86-8393-8891b5c6171c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003c0478}] Aliases:map[]}"
	Oct 18 09:17:01 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:01.459886975Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 09:17:01 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:01.460661489Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:17:01 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:01.461431981Z" level=info msg="Ran pod sandbox 8af25ac2f664198a53666f0e828c4381d5e1bb80708fea99ff72ee6ce0f5f8e8 with infra container: default/busybox/POD" id=146f8709-7a19-4ab2-9c4a-3a5c300d4ba2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:17:01 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:01.462753285Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f2a4699d-7da0-4e8e-9bf0-59a3dd42dfc3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:01 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:01.46291329Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f2a4699d-7da0-4e8e-9bf0-59a3dd42dfc3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:01 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:01.462952548Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f2a4699d-7da0-4e8e-9bf0-59a3dd42dfc3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:01 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:01.463757566Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9aad99ec-cda8-4ce4-ace5-852d77e276c1 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:17:01 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:01.465671367Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 09:17:02 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:02.212517851Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9aad99ec-cda8-4ce4-ace5-852d77e276c1 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:17:02 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:02.213478183Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=03eece60-0d8c-413b-a9c3-c608de3997ba name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:02 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:02.215137489Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ccd2a46a-86bb-40e3-ad35-de3d2d28b684 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:02 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:02.219865636Z" level=info msg="Creating container: default/busybox/busybox" id=bfc00b66-43f2-4ed0-a129-2db04cf9b553 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:02 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:02.220731238Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:02.225076862Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:02.225715788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:02.253976628Z" level=info msg="Created container 0d80ca5ebb612c10eab53d6e3bc44cd79d86b5aecf0ba70b17305f2081f5576f: default/busybox/busybox" id=bfc00b66-43f2-4ed0-a129-2db04cf9b553 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:02 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:02.25477396Z" level=info msg="Starting container: 0d80ca5ebb612c10eab53d6e3bc44cd79d86b5aecf0ba70b17305f2081f5576f" id=be1a801a-1915-4221-8cbb-30ef54c8c3ba name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:17:02 default-k8s-diff-port-986220 crio[778]: time="2025-10-18T09:17:02.25689778Z" level=info msg="Started container" PID=1935 containerID=0d80ca5ebb612c10eab53d6e3bc44cd79d86b5aecf0ba70b17305f2081f5576f description=default/busybox/busybox id=be1a801a-1915-4221-8cbb-30ef54c8c3ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=8af25ac2f664198a53666f0e828c4381d5e1bb80708fea99ff72ee6ce0f5f8e8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	0d80ca5ebb612       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   8af25ac2f6641       busybox                                                default
	a09d7ccaedddf       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   e4c9411a2b23f       coredns-66bc5c9577-bpcsk                               kube-system
	daba9b14782f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   3306b6d891515       storage-provisioner                                    kube-system
	3082b5fdc996c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   982dbc90f1b2c       kindnet-cj6bv                                          kube-system
	e187913af019d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   d8d962582926f       kube-proxy-vvtpl                                       kube-system
	465f93307e657       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   855a5bfca57e7       kube-scheduler-default-k8s-diff-port-986220            kube-system
	7d67c01df0f6c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   b2d73dcfb2bb9       kube-controller-manager-default-k8s-diff-port-986220   kube-system
	cd47839229372       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   3e972fd101fc3       kube-apiserver-default-k8s-diff-port-986220            kube-system
	babe0bfbe169f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   1761a8a33bb3c       etcd-default-k8s-diff-port-986220                      kube-system
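
The table above is the CRI's own view of the node, and every control-plane container is Running; the pause-check failure earlier sits below the CRI, in runc. The same listing can be fetched directly with crictl on the node, a sketch under the same assumptions as before (minikube binary path and profile name from the logs):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `crictl ps -a` lists all CRI-managed containers, running or exited,
        // mirroring the "container status" table in the minikube logs.
        out, err := exec.Command("out/minikube-linux-amd64", "ssh",
            "-p", "default-k8s-diff-port-986220", "--",
            "sudo", "crictl", "ps", "-a").CombinedOutput()
        fmt.Printf("err=%v\n%s", err, out)
    }
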
	
	
	==> coredns [a09d7ccaedddf61fe12189e58cc9ed74adb4a3a0a17fc89263d5c1899674081e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44552 - 23640 "HINFO IN 5341705815159014165.8916908469808647731. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.07060006s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-986220
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-986220
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=default-k8s-diff-port-986220
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_16_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:16:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-986220
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:17:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:16:58 +0000   Sat, 18 Oct 2025 09:16:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:16:58 +0000   Sat, 18 Oct 2025 09:16:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:16:58 +0000   Sat, 18 Oct 2025 09:16:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:16:58 +0000   Sat, 18 Oct 2025 09:16:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-986220
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                f86ae77e-f46d-47da-846c-c937a0a7701a
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-bpcsk                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-986220                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-cj6bv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-986220             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-986220    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-vvtpl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-986220             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  NodeHasSufficientMemory  36s (x8 over 37s)  kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 37s)  kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 37s)  kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node default-k8s-diff-port-986220 event: Registered Node default-k8s-diff-port-986220 in Controller
	  Normal  NodeReady                13s                kubelet          Node default-k8s-diff-port-986220 status is now: NodeReady
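
The event trail shows a clean bring-up: kubelet start, the three pressure conditions reporting healthy, node registration, and NodeReady 13 seconds before capture. To poll just the Ready condition rather than the full describe output, a small sketch (context and node name taken from the test; the jsonpath expression is standard kubectl):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prints "True" once the node reports the NodeReady condition shown
        // in the events above.
        out, err := exec.Command("kubectl",
            "--context", "default-k8s-diff-port-986220",
            "get", "node", "default-k8s-diff-port-986220",
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).CombinedOutput()
        fmt.Printf("err=%v ready=%s\n", err, out)
    }
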
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[ +43.809014] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 6f 4b 2b 7f 46 08 06
	[  +0.000367] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	
	
	==> etcd [babe0bfbe169f9703bbba06a9ac3481800fb3b9c2ddcdc2290e37edf3d621446] <==
	{"level":"warn","ts":"2025-10-18T09:16:38.010546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.018166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.028878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.037957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.046033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.054950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.064312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.076612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.093262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.101708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.112484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.119384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.127129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.136102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.144571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.153596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.162724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.172050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.180172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.188607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.196337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.211441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.219898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.228164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:38.300542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50098","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:17:11 up 59 min,  0 user,  load average: 3.59, 3.44, 2.37
	Linux default-k8s-diff-port-986220 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3082b5fdc996c1f8c48dcc37c22037605db07c1dd83f8722d9461a8b3c8e373c] <==
	I1018 09:16:47.469081       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:16:47.469431       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 09:16:47.469675       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:16:47.469694       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:16:47.469717       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:16:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:16:47.673904       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:16:47.673971       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:16:47.673992       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:16:47.731132       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:16:48.076272       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:16:48.076465       1 metrics.go:72] Registering metrics
	I1018 09:16:48.076592       1 controller.go:711] "Syncing nftables rules"
	I1018 09:16:57.676480       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:16:57.676560       1 main.go:301] handling current node
	I1018 09:17:07.677462       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:17:07.677511       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cd47839229372576f764327841ef22c5a7e8508b688d16b2806a58d6ee41c0d4] <==
	I1018 09:16:38.933944       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:16:38.934857       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 09:16:38.943675       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:16:38.943742       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 09:16:38.943841       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:16:38.960954       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:16:39.126805       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:16:39.832965       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:16:39.839533       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:16:39.839561       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:16:40.525568       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:16:40.576065       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:16:40.638078       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:16:40.644895       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1018 09:16:40.646307       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:16:40.656865       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:16:40.866290       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:16:41.931729       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:16:41.941856       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:16:41.950392       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:16:46.168499       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:16:46.174526       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:16:46.725955       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 09:16:46.970035       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1018 09:17:09.239130       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:59116: use of closed network connection
	
	
	==> kube-controller-manager [7d67c01df0f6cf5fffc513e9908aece05ed154d9ea8a50553fc4ce4bff4c4ee4] <==
	I1018 09:16:45.753768       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:16:45.763950       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:16:45.763954       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:16:45.765056       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:16:45.765076       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:16:45.765094       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:16:45.765145       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:16:45.765175       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 09:16:45.765189       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:16:45.765191       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:16:45.765215       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:16:45.765222       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:16:45.765413       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:16:45.765827       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:16:45.767008       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:16:45.768223       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:16:45.771419       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:16:45.781579       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:16:45.781628       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:16:45.825744       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 09:16:45.863736       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:16:45.863794       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:16:45.863802       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:16:45.926115       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:17:00.741931       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e187913af019d89ed1d34315616e5431807adc69b0c346b34d17a0011bd7ac80] <==
	I1018 09:16:47.252213       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:16:47.312490       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:16:47.413463       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:16:47.413507       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 09:16:47.413624       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:16:47.436438       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:16:47.436505       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:16:47.443216       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:16:47.444660       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:16:47.444690       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:16:47.446774       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:16:47.447308       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:16:47.446879       1 config.go:200] "Starting service config controller"
	I1018 09:16:47.447438       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:16:47.446912       1 config.go:309] "Starting node config controller"
	I1018 09:16:47.447455       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:16:47.447462       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:16:47.447134       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:16:47.447473       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:16:47.548221       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:16:47.548232       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:16:47.548260       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [465f93307e65766db31a4d713f9019b6fcd2910dbde37ba4ba2c5ad0f7b77faf] <==
	E1018 09:16:38.890786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:16:38.890801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:16:38.890928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:16:38.891004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:16:38.891172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:16:38.891224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:16:39.698100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:16:39.708189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:16:39.726189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:16:39.764011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:16:39.816807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:16:39.850965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 09:16:39.868997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:16:39.877599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:16:39.932401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:16:39.971793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:16:39.996192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:16:40.032614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:16:40.109506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:16:40.144053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:16:40.161547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:16:40.208586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:16:40.230405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:16:40.263053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1018 09:16:41.586930       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:16:42 default-k8s-diff-port-986220 kubelet[1341]: E1018 09:16:42.852774    1341 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-default-k8s-diff-port-986220\" already exists" pod="kube-system/kube-controller-manager-default-k8s-diff-port-986220"
	Oct 18 09:16:42 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:42.876613    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-986220" podStartSLOduration=1.8765899350000002 podStartE2EDuration="1.876589935s" podCreationTimestamp="2025-10-18 09:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:16:42.874077036 +0000 UTC m=+1.170249231" watchObservedRunningTime="2025-10-18 09:16:42.876589935 +0000 UTC m=+1.172762140"
	Oct 18 09:16:42 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:42.888727    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-986220" podStartSLOduration=1.888706965 podStartE2EDuration="1.888706965s" podCreationTimestamp="2025-10-18 09:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:16:42.888549072 +0000 UTC m=+1.184721274" watchObservedRunningTime="2025-10-18 09:16:42.888706965 +0000 UTC m=+1.184879165"
	Oct 18 09:16:42 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:42.898883    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-986220" podStartSLOduration=1.898862613 podStartE2EDuration="1.898862613s" podCreationTimestamp="2025-10-18 09:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:16:42.898681078 +0000 UTC m=+1.194853280" watchObservedRunningTime="2025-10-18 09:16:42.898862613 +0000 UTC m=+1.195034814"
	Oct 18 09:16:42 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:42.924264    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-986220" podStartSLOduration=1.9242414129999998 podStartE2EDuration="1.924241413s" podCreationTimestamp="2025-10-18 09:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:16:42.911367358 +0000 UTC m=+1.207539557" watchObservedRunningTime="2025-10-18 09:16:42.924241413 +0000 UTC m=+1.220413616"
	Oct 18 09:16:45 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:45.759716    1341 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:16:45 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:45.760527    1341 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:16:46 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:46.831764    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3be57d5a-db16-4280-936c-af1a1e022017-lib-modules\") pod \"kube-proxy-vvtpl\" (UID: \"3be57d5a-db16-4280-936c-af1a1e022017\") " pod="kube-system/kube-proxy-vvtpl"
	Oct 18 09:16:46 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:46.831806    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p74qz\" (UniqueName: \"kubernetes.io/projected/b21a6117-74a1-4a94-9dc4-3ba0856e6712-kube-api-access-p74qz\") pod \"kindnet-cj6bv\" (UID: \"b21a6117-74a1-4a94-9dc4-3ba0856e6712\") " pod="kube-system/kindnet-cj6bv"
	Oct 18 09:16:46 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:46.831834    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhrpw\" (UniqueName: \"kubernetes.io/projected/3be57d5a-db16-4280-936c-af1a1e022017-kube-api-access-rhrpw\") pod \"kube-proxy-vvtpl\" (UID: \"3be57d5a-db16-4280-936c-af1a1e022017\") " pod="kube-system/kube-proxy-vvtpl"
	Oct 18 09:16:46 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:46.831872    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b21a6117-74a1-4a94-9dc4-3ba0856e6712-xtables-lock\") pod \"kindnet-cj6bv\" (UID: \"b21a6117-74a1-4a94-9dc4-3ba0856e6712\") " pod="kube-system/kindnet-cj6bv"
	Oct 18 09:16:46 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:46.831916    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3be57d5a-db16-4280-936c-af1a1e022017-kube-proxy\") pod \"kube-proxy-vvtpl\" (UID: \"3be57d5a-db16-4280-936c-af1a1e022017\") " pod="kube-system/kube-proxy-vvtpl"
	Oct 18 09:16:46 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:46.831969    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b21a6117-74a1-4a94-9dc4-3ba0856e6712-lib-modules\") pod \"kindnet-cj6bv\" (UID: \"b21a6117-74a1-4a94-9dc4-3ba0856e6712\") " pod="kube-system/kindnet-cj6bv"
	Oct 18 09:16:46 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:46.832035    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3be57d5a-db16-4280-936c-af1a1e022017-xtables-lock\") pod \"kube-proxy-vvtpl\" (UID: \"3be57d5a-db16-4280-936c-af1a1e022017\") " pod="kube-system/kube-proxy-vvtpl"
	Oct 18 09:16:46 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:46.832059    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b21a6117-74a1-4a94-9dc4-3ba0856e6712-cni-cfg\") pod \"kindnet-cj6bv\" (UID: \"b21a6117-74a1-4a94-9dc4-3ba0856e6712\") " pod="kube-system/kindnet-cj6bv"
	Oct 18 09:16:47 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:47.880408    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vvtpl" podStartSLOduration=1.8803807030000002 podStartE2EDuration="1.880380703s" podCreationTimestamp="2025-10-18 09:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:16:47.880195246 +0000 UTC m=+6.176367442" watchObservedRunningTime="2025-10-18 09:16:47.880380703 +0000 UTC m=+6.176552923"
	Oct 18 09:16:47 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:47.880566    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cj6bv" podStartSLOduration=1.8805525429999999 podStartE2EDuration="1.880552543s" podCreationTimestamp="2025-10-18 09:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:16:47.868665278 +0000 UTC m=+6.164837481" watchObservedRunningTime="2025-10-18 09:16:47.880552543 +0000 UTC m=+6.176724745"
	Oct 18 09:16:58 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:58.026955    1341 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 09:16:58 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:58.110032    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p5sw\" (UniqueName: \"kubernetes.io/projected/6b5391e1-9c35-460f-b52d-8d434084db0e-kube-api-access-7p5sw\") pod \"storage-provisioner\" (UID: \"6b5391e1-9c35-460f-b52d-8d434084db0e\") " pod="kube-system/storage-provisioner"
	Oct 18 09:16:58 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:58.110088    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6b5391e1-9c35-460f-b52d-8d434084db0e-tmp\") pod \"storage-provisioner\" (UID: \"6b5391e1-9c35-460f-b52d-8d434084db0e\") " pod="kube-system/storage-provisioner"
	Oct 18 09:16:58 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:58.110125    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d89ef1c8-1a4b-41b5-9ecf-66daaae426ba-config-volume\") pod \"coredns-66bc5c9577-bpcsk\" (UID: \"d89ef1c8-1a4b-41b5-9ecf-66daaae426ba\") " pod="kube-system/coredns-66bc5c9577-bpcsk"
	Oct 18 09:16:58 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:58.110152    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjvbt\" (UniqueName: \"kubernetes.io/projected/d89ef1c8-1a4b-41b5-9ecf-66daaae426ba-kube-api-access-wjvbt\") pod \"coredns-66bc5c9577-bpcsk\" (UID: \"d89ef1c8-1a4b-41b5-9ecf-66daaae426ba\") " pod="kube-system/coredns-66bc5c9577-bpcsk"
	Oct 18 09:16:58 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:58.899380    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bpcsk" podStartSLOduration=11.899317177 podStartE2EDuration="11.899317177s" podCreationTimestamp="2025-10-18 09:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:16:58.888453638 +0000 UTC m=+17.184625840" watchObservedRunningTime="2025-10-18 09:16:58.899317177 +0000 UTC m=+17.195489378"
	Oct 18 09:16:58 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:16:58.910315    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.91029352 podStartE2EDuration="11.91029352s" podCreationTimestamp="2025-10-18 09:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:16:58.899605058 +0000 UTC m=+17.195777272" watchObservedRunningTime="2025-10-18 09:16:58.91029352 +0000 UTC m=+17.206465722"
	Oct 18 09:17:01 default-k8s-diff-port-986220 kubelet[1341]: I1018 09:17:01.230831    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9zh7\" (UniqueName: \"kubernetes.io/projected/335a5ad4-0ec1-49da-9c93-b12fad5660a4-kube-api-access-f9zh7\") pod \"busybox\" (UID: \"335a5ad4-0ec1-49da-9c93-b12fad5660a4\") " pod="default/busybox"
	
	
	==> storage-provisioner [daba9b14782f03a8d4981556ad81ddd3dda706623d7e62eb63fc3de8c70db89d] <==
	I1018 09:16:58.414940       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:16:58.424298       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:16:58.424375       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:16:58.426857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:58.432715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:16:58.432971       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:16:58.433106       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b90e1b13-d855-40dd-8fdf-9ac19eb23314", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-986220_7f6ecace-9e3b-49e3-8de0-0c40a2458381 became leader
	I1018 09:16:58.433163       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-986220_7f6ecace-9e3b-49e3-8de0-0c40a2458381!
	W1018 09:16:58.435865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:16:58.440523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:16:58.533625       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-986220_7f6ecace-9e3b-49e3-8de0-0c40a2458381!
	W1018 09:17:00.444433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:00.450321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:02.454132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:02.459858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:04.463285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:04.468789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:06.471756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:06.476402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:08.480096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:08.485064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:10.488192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:10.504064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-986220 -n default-k8s-diff-port-986220
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-986220 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.60s)
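
Editor's note on the post-mortem logs above: the long run of etcd "rejected connection on client endpoint ... error: EOF" warnings is consistent with plain TCP probes (for example, a port-availability check during apiserver startup) that open etcd's client port and close it again before completing a TLS handshake; etcd's embed/config_logging.go reports the empty connection as EOF. A minimal Go sketch of a probe that would trigger the same server-side log entry; the endpoint 127.0.0.1:2379 is an illustrative assumption, not taken from this report:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Open a raw TCP connection to etcd's client port and close it
		// without sending any bytes. etcd accepts the socket, reads EOF
		// before any TLS handshake, and logs a rejected connection.
		conn, err := net.Dial("tcp", "127.0.0.1:2379") // assumed endpoint
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("probe closed; etcd should log a rejected-connection EOF warning")
	}

Because such probes carry no request, these warnings on their own do not indicate an etcd failure.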

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-031066 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-031066 --alsologtostderr -v=1: exit status 80 (2.058727022s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-031066 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:17:25.041147  323134 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:17:25.041468  323134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:25.041481  323134 out.go:374] Setting ErrFile to fd 2...
	I1018 09:17:25.041487  323134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:25.041700  323134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:17:25.041983  323134 out.go:368] Setting JSON to false
	I1018 09:17:25.042030  323134 mustload.go:65] Loading cluster: no-preload-031066
	I1018 09:17:25.042408  323134 config.go:182] Loaded profile config "no-preload-031066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:25.042842  323134 cli_runner.go:164] Run: docker container inspect no-preload-031066 --format={{.State.Status}}
	I1018 09:17:25.062846  323134 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:17:25.063272  323134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:25.123971  323134 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:90 SystemTime:2025-10-18 09:17:25.113095582 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:25.124794  323134 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-031066 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:17:25.126930  323134 out.go:179] * Pausing node no-preload-031066 ... 
	I1018 09:17:25.128310  323134 host.go:66] Checking if "no-preload-031066" exists ...
	I1018 09:17:25.128633  323134 ssh_runner.go:195] Run: systemctl --version
	I1018 09:17:25.128686  323134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-031066
	I1018 09:17:25.149541  323134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/no-preload-031066/id_rsa Username:docker}
	I1018 09:17:25.256426  323134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:17:25.280418  323134 pause.go:52] kubelet running: true
	I1018 09:17:25.280510  323134 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:17:25.517996  323134 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:17:25.518095  323134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:17:25.630958  323134 cri.go:89] found id: "75ce77572e8bf3a989ac18d086a9f8cfbaae21b8f7296e1a9244cd17d037a2e0"
	I1018 09:17:25.630983  323134 cri.go:89] found id: "2dc50ea4d70ff173a36533c407b63be08c1ba2f027b2e06301f77dc0a6e2fb65"
	I1018 09:17:25.630989  323134 cri.go:89] found id: "8935165d381c76e0adbf1b4796ec6dacb8a681c43afb77e2bac74597041759ac"
	I1018 09:17:25.630993  323134 cri.go:89] found id: "c605782e4c42eb91b018538170fe921def3b1396402bc03eee1dbce4f4af6a69"
	I1018 09:17:25.630998  323134 cri.go:89] found id: "703ddbb4126b1f1be32f6c0f727cee37f39cb83a922d2a063922f5b314414d37"
	I1018 09:17:25.631002  323134 cri.go:89] found id: "153dd41ff60f495d247d4bd42054dd9255c2fe5ccbc173f31021152a50b30308"
	I1018 09:17:25.631006  323134 cri.go:89] found id: "b51a1224ef6b876bc35ce20f2366f94525e300e4432dff8348abbde915ade5af"
	I1018 09:17:25.631010  323134 cri.go:89] found id: "62682de07bbfeb0d0f0c6405121566236410d571314651f369c15f65938b548a"
	I1018 09:17:25.631014  323134 cri.go:89] found id: "db536597b2746191742cfa1b8df28f2fe3935b9a553d5543f993db2773c9f6a1"
	I1018 09:17:25.631022  323134 cri.go:89] found id: "b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0"
	I1018 09:17:25.631026  323134 cri.go:89] found id: "e718435c8ea3ef9ed304f9cc405a3feced7a46aa8145c5c913dda9eee2bbfb61"
	I1018 09:17:25.631029  323134 cri.go:89] found id: ""
	I1018 09:17:25.631071  323134 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:17:25.648863  323134 retry.go:31] will retry after 172.904562ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:17:25Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:17:25.822332  323134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:17:25.840992  323134 pause.go:52] kubelet running: false
	I1018 09:17:25.841051  323134 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:17:26.067522  323134 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:17:26.067611  323134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:17:26.167990  323134 cri.go:89] found id: "75ce77572e8bf3a989ac18d086a9f8cfbaae21b8f7296e1a9244cd17d037a2e0"
	I1018 09:17:26.168020  323134 cri.go:89] found id: "2dc50ea4d70ff173a36533c407b63be08c1ba2f027b2e06301f77dc0a6e2fb65"
	I1018 09:17:26.168026  323134 cri.go:89] found id: "8935165d381c76e0adbf1b4796ec6dacb8a681c43afb77e2bac74597041759ac"
	I1018 09:17:26.168030  323134 cri.go:89] found id: "c605782e4c42eb91b018538170fe921def3b1396402bc03eee1dbce4f4af6a69"
	I1018 09:17:26.168044  323134 cri.go:89] found id: "703ddbb4126b1f1be32f6c0f727cee37f39cb83a922d2a063922f5b314414d37"
	I1018 09:17:26.168049  323134 cri.go:89] found id: "153dd41ff60f495d247d4bd42054dd9255c2fe5ccbc173f31021152a50b30308"
	I1018 09:17:26.168053  323134 cri.go:89] found id: "b51a1224ef6b876bc35ce20f2366f94525e300e4432dff8348abbde915ade5af"
	I1018 09:17:26.168057  323134 cri.go:89] found id: "62682de07bbfeb0d0f0c6405121566236410d571314651f369c15f65938b548a"
	I1018 09:17:26.168061  323134 cri.go:89] found id: "db536597b2746191742cfa1b8df28f2fe3935b9a553d5543f993db2773c9f6a1"
	I1018 09:17:26.168069  323134 cri.go:89] found id: "b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0"
	I1018 09:17:26.168077  323134 cri.go:89] found id: "e718435c8ea3ef9ed304f9cc405a3feced7a46aa8145c5c913dda9eee2bbfb61"
	I1018 09:17:26.168081  323134 cri.go:89] found id: ""
	I1018 09:17:26.168157  323134 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:17:26.182132  323134 retry.go:31] will retry after 462.289701ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:17:26Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:17:26.644655  323134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:17:26.661461  323134 pause.go:52] kubelet running: false
	I1018 09:17:26.661567  323134 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:17:26.895841  323134 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:17:26.895967  323134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:17:27.004054  323134 cri.go:89] found id: "75ce77572e8bf3a989ac18d086a9f8cfbaae21b8f7296e1a9244cd17d037a2e0"
	I1018 09:17:27.004081  323134 cri.go:89] found id: "2dc50ea4d70ff173a36533c407b63be08c1ba2f027b2e06301f77dc0a6e2fb65"
	I1018 09:17:27.004087  323134 cri.go:89] found id: "8935165d381c76e0adbf1b4796ec6dacb8a681c43afb77e2bac74597041759ac"
	I1018 09:17:27.004091  323134 cri.go:89] found id: "c605782e4c42eb91b018538170fe921def3b1396402bc03eee1dbce4f4af6a69"
	I1018 09:17:27.004096  323134 cri.go:89] found id: "703ddbb4126b1f1be32f6c0f727cee37f39cb83a922d2a063922f5b314414d37"
	I1018 09:17:27.004101  323134 cri.go:89] found id: "153dd41ff60f495d247d4bd42054dd9255c2fe5ccbc173f31021152a50b30308"
	I1018 09:17:27.004106  323134 cri.go:89] found id: "b51a1224ef6b876bc35ce20f2366f94525e300e4432dff8348abbde915ade5af"
	I1018 09:17:27.004110  323134 cri.go:89] found id: "62682de07bbfeb0d0f0c6405121566236410d571314651f369c15f65938b548a"
	I1018 09:17:27.004114  323134 cri.go:89] found id: "db536597b2746191742cfa1b8df28f2fe3935b9a553d5543f993db2773c9f6a1"
	I1018 09:17:27.004123  323134 cri.go:89] found id: "b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0"
	I1018 09:17:27.004141  323134 cri.go:89] found id: "e718435c8ea3ef9ed304f9cc405a3feced7a46aa8145c5c913dda9eee2bbfb61"
	I1018 09:17:27.004147  323134 cri.go:89] found id: ""
	I1018 09:17:27.004195  323134 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:17:27.025794  323134 out.go:203] 
	W1018 09:17:27.028049  323134 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:17:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:17:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:17:27.028072  323134 out.go:285] * 
	* 
	W1018 09:17:27.034855  323134 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:17:27.038470  323134 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-031066 --alsologtostderr -v=1 failed: exit status 80
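
Note on the failure mode: the pause path shells out to "sudo runc list -f json" to enumerate running containers before freezing them, and on this crio node the runc state directory /run/runc never exists, so both the first attempt and the 462ms retry exit 1 and pause aborts with GUEST_PAUSE. Below is a minimal standalone sketch of that probe, assuming only the commands visible in the log; the /run/runc check and the messages are illustrative, not minikube's pause.go.

	// runcprobe.go: retry "sudo runc list -f json" once, then explain the
	// missing /run/runc case seen in the log above. Illustrative sketch only.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// listRunc mirrors the logged command: sudo runc list -f json
	func listRunc() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	}

	func main() {
		out, err := listRunc()
		if err != nil {
			time.Sleep(500 * time.Millisecond) // retry.go above backs off ~462ms
			out, err = listRunc()
		}
		if err != nil {
			// runc keeps container state under /run/runc by default; when crio
			// drives a different low-level runtime that directory may not exist
			if _, statErr := os.Stat("/run/runc"); os.IsNotExist(statErr) {
				fmt.Fprintln(os.Stderr, "/run/runc missing: containers are not managed by stock runc here")
			}
			fmt.Fprintf(os.Stderr, "runc list failed: %v\n%s", err, out)
			os.Exit(1)
		}
		fmt.Printf("%s", out)
	}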
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-031066
helpers_test.go:243: (dbg) docker inspect no-preload-031066:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0",
	        "Created": "2025-10-18T09:14:59.840380685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309639,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:16:21.509995917Z",
	            "FinishedAt": "2025-10-18T09:16:20.67977806Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0/hostname",
	        "HostsPath": "/var/lib/docker/containers/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0/hosts",
	        "LogPath": "/var/lib/docker/containers/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0-json.log",
	        "Name": "/no-preload-031066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-031066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-031066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0",
	                "LowerDir": "/var/lib/docker/overlay2/1e0e83685e550417cddd524d2d8b786a0c193a25b235b1df64d1bc4562ba00b1-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1e0e83685e550417cddd524d2d8b786a0c193a25b235b1df64d1bc4562ba00b1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1e0e83685e550417cddd524d2d8b786a0c193a25b235b1df64d1bc4562ba00b1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1e0e83685e550417cddd524d2d8b786a0c193a25b235b1df64d1bc4562ba00b1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-031066",
	                "Source": "/var/lib/docker/volumes/no-preload-031066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-031066",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-031066",
	                "name.minikube.sigs.k8s.io": "no-preload-031066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf6d3adbad74b83d7f67e9fbb4f0d081850f00a62b7124d9478bf4c4cb90b469",
	            "SandboxKey": "/var/run/docker/netns/cf6d3adbad74",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-031066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:da:e3:74:b5:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "659f168a65764f8b90baada540d0c1e70a7a90e0cd6e43139115c0a2c2f0c906",
	                    "EndpointID": "a928894f00caa7cff351765b3b30caf9e8449171543c306c7c567236d4be4067",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-031066",
	                        "dce899f902ae"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
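
Note on the inspect output: HostConfig.PortBindings requests ephemeral host ports (every HostPort is ""), and the ports Docker actually assigned appear under NetworkSettings.Ports (33113-33117 here). A small sketch of recovering the mapped SSH port with the same Go template the cli_runner invocations later in this report use; the container name is taken from the output above.

	// sshport.go: read the host port bound to the container's 22/tcp.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		const container = "no-preload-031066"
		// same Go template as the cli_runner lines in the logs below
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			log.Fatalf("inspect %s: %v", container, err)
		}
		fmt.Println("host port for 22/tcp:", strings.TrimSpace(string(out))) // 33113 above
	}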
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-031066 -n no-preload-031066
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-031066 -n no-preload-031066: exit status 2 (434.283873ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-031066 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-031066 logs -n 25: (1.946149468s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-448954 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                  │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo cat /etc/containerd/config.toml                                                                                                                                                                             │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo containerd config dump                                                                                                                                                                                      │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl status crio --all --full --no-pager                                                                                                                                                               │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl cat crio --no-pager                                                                                                                                                                               │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                     │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo crio config                                                                                                                                                                                                 │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ delete  │ -p enable-default-cni-448954                                                                                                                                                                                                                  │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ delete  │ -p disable-driver-mounts-634520                                                                                                                                                                                                               │ disable-driver-mounts-634520 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-031066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p no-preload-031066 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-880603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-880603 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ image   │ old-k8s-version-951975 image list --format=json                                                                                                                                                                                               │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ pause   │ -p old-k8s-version-951975 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-986220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-880603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-986220 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ image   │ no-preload-031066 image list --format=json                                                                                                                                                                                                    │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ pause   │ -p no-preload-031066 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:17:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:17:11.127976  318609 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:17:11.128099  318609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:11.128110  318609 out.go:374] Setting ErrFile to fd 2...
	I1018 09:17:11.128116  318609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:11.128407  318609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:17:11.129108  318609 out.go:368] Setting JSON to false
	I1018 09:17:11.130745  318609 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3579,"bootTime":1760775452,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:17:11.130899  318609 start.go:141] virtualization: kvm guest
	I1018 09:17:11.133028  318609 out.go:179] * [embed-certs-880603] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:17:11.134976  318609 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:17:11.134978  318609 notify.go:220] Checking for updates...
	I1018 09:17:11.137430  318609 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:17:11.138623  318609 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:11.139983  318609 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:17:11.141321  318609 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:17:11.144577  318609 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:17:11.146504  318609 config.go:182] Loaded profile config "embed-certs-880603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:11.147240  318609 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:17:11.177186  318609 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:17:11.177283  318609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:11.246215  318609 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:82 SystemTime:2025-10-18 09:17:11.234596914 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:11.246390  318609 docker.go:318] overlay module found
	I1018 09:17:11.248080  318609 out.go:179] * Using the docker driver based on existing profile
	W1018 09:17:08.564826  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	W1018 09:17:11.064301  309439 pod_ready.go:104] pod "coredns-66bc5c9577-h44wj" is not "Ready", error: <nil>
	I1018 09:17:11.249412  318609 start.go:305] selected driver: docker
	I1018 09:17:11.249428  318609 start.go:925] validating driver "docker" against &{Name:embed-certs-880603 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:11.249526  318609 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:17:11.250068  318609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:11.319441  318609 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:82 SystemTime:2025-10-18 09:17:11.308704487 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:11.319855  318609 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:17:11.319896  318609 cni.go:84] Creating CNI manager for ""
	I1018 09:17:11.319974  318609 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:17:11.320031  318609 start.go:349] cluster config:
	{Name:embed-certs-880603 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:11.322409  318609 out.go:179] * Starting "embed-certs-880603" primary control-plane node in "embed-certs-880603" cluster
	I1018 09:17:11.323775  318609 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:17:11.325192  318609 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:17:11.326350  318609 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:17:11.326403  318609 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:17:11.326420  318609 cache.go:58] Caching tarball of preloaded images
	I1018 09:17:11.326463  318609 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:17:11.326535  318609 preload.go:233] Found /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:17:11.326551  318609 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:17:11.326691  318609 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/config.json ...
	I1018 09:17:11.351627  318609 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:17:11.351655  318609 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:17:11.351675  318609 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:17:11.351700  318609 start.go:360] acquireMachinesLock for embed-certs-880603: {Name:mkdfbdbf4ee52d14237c1c3c1038142062936208 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:17:11.351755  318609 start.go:364] duration metric: took 38.508µs to acquireMachinesLock for "embed-certs-880603"
	I1018 09:17:11.351773  318609 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:17:11.351777  318609 fix.go:54] fixHost starting: 
	I1018 09:17:11.351977  318609 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:17:11.373442  318609 fix.go:112] recreateIfNeeded on embed-certs-880603: state=Stopped err=<nil>
	W1018 09:17:11.373476  318609 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:17:11.564720  309439 pod_ready.go:94] pod "coredns-66bc5c9577-h44wj" is "Ready"
	I1018 09:17:11.564765  309439 pod_ready.go:86] duration metric: took 39.506548525s for pod "coredns-66bc5c9577-h44wj" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:11.568031  309439 pod_ready.go:83] waiting for pod "etcd-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:11.573550  309439 pod_ready.go:94] pod "etcd-no-preload-031066" is "Ready"
	I1018 09:17:11.573587  309439 pod_ready.go:86] duration metric: took 5.523348ms for pod "etcd-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:11.576604  309439 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:11.581554  309439 pod_ready.go:94] pod "kube-apiserver-no-preload-031066" is "Ready"
	I1018 09:17:11.581585  309439 pod_ready.go:86] duration metric: took 4.95841ms for pod "kube-apiserver-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:11.584069  309439 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:11.761986  309439 pod_ready.go:94] pod "kube-controller-manager-no-preload-031066" is "Ready"
	I1018 09:17:11.762015  309439 pod_ready.go:86] duration metric: took 177.918371ms for pod "kube-controller-manager-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:11.963955  309439 pod_ready.go:83] waiting for pod "kube-proxy-jr5qn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:12.362509  309439 pod_ready.go:94] pod "kube-proxy-jr5qn" is "Ready"
	I1018 09:17:12.362535  309439 pod_ready.go:86] duration metric: took 398.550543ms for pod "kube-proxy-jr5qn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:12.563323  309439 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:12.961983  309439 pod_ready.go:94] pod "kube-scheduler-no-preload-031066" is "Ready"
	I1018 09:17:12.962010  309439 pod_ready.go:86] duration metric: took 398.63196ms for pod "kube-scheduler-no-preload-031066" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:12.962024  309439 pod_ready.go:40] duration metric: took 40.908929826s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:17:13.016156  309439 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:17:13.198032  309439 out.go:179] * Done! kubectl is now configured to use "no-preload-031066" cluster and "default" namespace by default
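	
	Note: the pod_ready.go lines above gate startup on each labelled kube-system control-plane pod in turn, recording a per-pod duration metric. A rough standalone equivalent driven by kubectl alone (a sketch, not minikube's implementation; the label list is copied from the summary line above):
	
	// podready.go: block until the labelled kube-system pods report Ready.
	package main
	
	import (
		"log"
		"os/exec"
	)
	
	func main() {
		// labels from the pod_ready.go summary line above
		labels := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		for _, l := range labels {
			out, err := exec.Command("kubectl", "-n", "kube-system", "wait",
				"--for=condition=Ready", "pod", "-l", l, "--timeout=2m").CombinedOutput()
			if err != nil {
				log.Fatalf("pods %q not Ready: %v\n%s", l, err, out)
			}
		}
		log.Println("all labelled kube-system pods Ready")
	}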
	I1018 09:17:09.188634  317552 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:17:09.188912  317552 start.go:159] libmachine.API.Create for "newest-cni-444637" (driver="docker")
	I1018 09:17:09.188950  317552 client.go:168] LocalClient.Create starting
	I1018 09:17:09.189053  317552 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem
	I1018 09:17:09.189089  317552 main.go:141] libmachine: Decoding PEM data...
	I1018 09:17:09.189109  317552 main.go:141] libmachine: Parsing certificate...
	I1018 09:17:09.189178  317552 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem
	I1018 09:17:09.189213  317552 main.go:141] libmachine: Decoding PEM data...
	I1018 09:17:09.189240  317552 main.go:141] libmachine: Parsing certificate...
	I1018 09:17:09.189750  317552 cli_runner.go:164] Run: docker network inspect newest-cni-444637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:17:09.210123  317552 cli_runner.go:211] docker network inspect newest-cni-444637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:17:09.210206  317552 network_create.go:284] running [docker network inspect newest-cni-444637] to gather additional debugging logs...
	I1018 09:17:09.210231  317552 cli_runner.go:164] Run: docker network inspect newest-cni-444637
	W1018 09:17:09.229627  317552 cli_runner.go:211] docker network inspect newest-cni-444637 returned with exit code 1
	I1018 09:17:09.229664  317552 network_create.go:287] error running [docker network inspect newest-cni-444637]: docker network inspect newest-cni-444637: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-444637 not found
	I1018 09:17:09.229695  317552 network_create.go:289] output of [docker network inspect newest-cni-444637]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-444637 not found
	
	** /stderr **
	I1018 09:17:09.229806  317552 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:17:09.251681  317552 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0a5d0734e8e5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:09:81:3f:ef:cf} reservation:<nil>}
	I1018 09:17:09.252531  317552 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0be1ffd412fe IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:00:46:36:7b:65} reservation:<nil>}
	I1018 09:17:09.253598  317552 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e93e49dbe6fd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:52:68:21:3c:ba:1e} reservation:<nil>}
	I1018 09:17:09.254456  317552 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-00da72598f1f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:40:50:13:fe:53} reservation:<nil>}
	I1018 09:17:09.256764  317552 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-659f168a6576 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ae:c5:d4:16:bd:0d} reservation:<nil>}
	I1018 09:17:09.257480  317552 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-ef55982bb9e9 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:b2:96:36:ed:5a:aa} reservation:<nil>}
	I1018 09:17:09.258151  317552 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e5e390}
	I1018 09:17:09.258175  317552 network_create.go:124] attempt to create docker network newest-cni-444637 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1018 09:17:09.258226  317552 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-444637 newest-cni-444637
	I1018 09:17:09.323595  317552 network_create.go:108] docker network newest-cni-444637 192.168.103.0/24 created
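	
	Note: the subnet walk above steps the third octet by 9 (49, 58, 67, 76, 85, 94, 103) and takes the first /24 that no existing bridge owns. A standalone sketch of that selection; the taken set is hard-coded from the log lines above, whereas minikube derives it from the host's interfaces and docker networks:
	
	// subnetpick.go: pick the first free 192.168.x.0/24 candidate.
	package main
	
	import "fmt"
	
	func main() {
		// subnets the walk above found occupied (one per existing bridge)
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
			"192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
		}
		// candidates step the third octet by 9, the sequence visible in the log
		for third := 49; third < 256; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[subnet] {
				fmt.Println("using free private subnet", subnet) // prints 192.168.103.0/24
				return
			}
		}
		fmt.Println("no free /24 in 192.168.0.0/16")
	}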
	I1018 09:17:09.323635  317552 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-444637" container
	I1018 09:17:09.323717  317552 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:17:09.344125  317552 cli_runner.go:164] Run: docker volume create newest-cni-444637 --label name.minikube.sigs.k8s.io=newest-cni-444637 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:17:09.365337  317552 oci.go:103] Successfully created a docker volume newest-cni-444637
	I1018 09:17:09.365443  317552 cli_runner.go:164] Run: docker run --rm --name newest-cni-444637-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-444637 --entrypoint /usr/bin/test -v newest-cni-444637:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:17:09.784077  317552 oci.go:107] Successfully prepared a docker volume newest-cni-444637
	I1018 09:17:09.784112  317552 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:17:09.784132  317552 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:17:09.784201  317552 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-444637:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 09:17:13.695381  317552 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-444637:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.911104105s)
	I1018 09:17:13.695418  317552 kic.go:203] duration metric: took 3.911282105s to extract preloaded images to volume ...
	W1018 09:17:13.695514  317552 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:17:13.695555  317552 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:17:13.695606  317552 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:17:13.760997  317552 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-444637 --name newest-cni-444637 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-444637 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-444637 --network newest-cni-444637 --ip 192.168.103.2 --volume newest-cni-444637:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:17:11.378708  318609 out.go:252] * Restarting existing docker container for "embed-certs-880603" ...
	I1018 09:17:11.378840  318609 cli_runner.go:164] Run: docker start embed-certs-880603
	I1018 09:17:11.670023  318609 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:17:11.698586  318609 kic.go:430] container "embed-certs-880603" state is running.
	I1018 09:17:11.699239  318609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-880603
	I1018 09:17:11.723193  318609 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/config.json ...
	I1018 09:17:11.723494  318609 machine.go:93] provisionDockerMachine start ...
	I1018 09:17:11.723580  318609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:17:11.748449  318609 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:11.748760  318609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1018 09:17:11.748776  318609 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:17:11.749639  318609 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51858->127.0.0.1:33118: read: connection reset by peer
	I1018 09:17:14.885465  318609 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880603
	
	I1018 09:17:14.885500  318609 ubuntu.go:182] provisioning hostname "embed-certs-880603"
	I1018 09:17:14.885560  318609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:17:14.906168  318609 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:14.906453  318609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1018 09:17:14.906475  318609 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880603 && echo "embed-certs-880603" | sudo tee /etc/hostname
	I1018 09:17:15.051297  318609 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880603
	
	I1018 09:17:15.051402  318609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:17:15.070949  318609 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:15.071273  318609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1018 09:17:15.071301  318609 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880603' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880603/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880603' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:17:15.208294  318609 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:17:15.208331  318609 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:17:15.208368  318609 ubuntu.go:190] setting up certificates
	I1018 09:17:15.208390  318609 provision.go:84] configureAuth start
	I1018 09:17:15.208447  318609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-880603
	I1018 09:17:15.229045  318609 provision.go:143] copyHostCerts
	I1018 09:17:15.229113  318609 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:17:15.229121  318609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:17:15.229173  318609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:17:15.229276  318609 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:17:15.229285  318609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:17:15.229307  318609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:17:15.229417  318609 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:17:15.229428  318609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:17:15.229452  318609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:17:15.229526  318609 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880603 san=[127.0.0.1 192.168.76.2 embed-certs-880603 localhost minikube]
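The server certificate generated above embeds the SANs listed in the log (127.0.0.1, 192.168.76.2, embed-certs-880603, localhost, minikube). As a hedged aside, the SANs actually baked into such a cert can be checked with openssl; the path below is the server.pem from this run and is an assumption if reproducing elsewhere:

    # Print the Subject Alternative Names embedded in the generated server cert
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'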
	I1018 09:17:15.433412  318609 provision.go:177] copyRemoteCerts
	I1018 09:17:15.433470  318609 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:17:15.433514  318609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:17:15.453413  318609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:17:15.550904  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:17:15.572110  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:17:15.594541  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 09:17:15.615376  318609 provision.go:87] duration metric: took 406.940904ms to configureAuth
	I1018 09:17:15.615413  318609 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:17:15.615610  318609 config.go:182] Loaded profile config "embed-certs-880603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:15.615717  318609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:17:15.636877  318609 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:15.637184  318609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1018 09:17:15.637207  318609 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:17:15.943641  318609 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:17:15.943667  318609 machine.go:96] duration metric: took 4.22015227s to provisionDockerMachine
	I1018 09:17:15.943681  318609 start.go:293] postStartSetup for "embed-certs-880603" (driver="docker")
	I1018 09:17:15.943696  318609 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:17:15.943791  318609 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:17:15.943861  318609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:17:15.965089  318609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:17:16.066941  318609 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:17:16.070914  318609 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:17:16.070950  318609 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:17:16.070964  318609 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:17:16.071035  318609 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:17:16.071137  318609 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:17:16.071255  318609 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:17:16.079740  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:17:16.099026  318609 start.go:296] duration metric: took 155.33037ms for postStartSetup
	I1018 09:17:16.099095  318609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:17:16.099128  318609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:17:16.118386  318609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:17:14.049877  317552 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Running}}
	I1018 09:17:14.068863  317552 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:14.089197  317552 cli_runner.go:164] Run: docker exec newest-cni-444637 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:17:14.138076  317552 oci.go:144] the created container "newest-cni-444637" has a running status.
	I1018 09:17:14.138118  317552 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa...
	I1018 09:17:14.384996  317552 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:17:14.417071  317552 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:14.439071  317552 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:17:14.439092  317552 kic_runner.go:114] Args: [docker exec --privileged newest-cni-444637 chown docker:docker /home/docker/.ssh/authorized_keys]
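The key provisioning logged above follows a simple pattern: generate an RSA keypair on the host, copy the public half into the container as the docker user's authorized_keys, then fix ownership via a privileged exec. A minimal sketch of the same steps (container name and in-container user are taken from this log; directory creation is an assumption):

    # Generate a passphrase-less keypair on the host
    ssh-keygen -t rsa -N '' -f ./id_rsa
    # Install the public key for the in-container 'docker' user
    docker exec newest-cni-444637 mkdir -p /home/docker/.ssh
    docker cp ./id_rsa.pub newest-cni-444637:/home/docker/.ssh/authorized_keys
    docker exec --privileged newest-cni-444637 chown -R docker:docker /home/docker/.ssh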
	I1018 09:17:14.481997  317552 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:14.501774  317552 machine.go:93] provisionDockerMachine start ...
	I1018 09:17:14.501869  317552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:14.521222  317552 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:14.521511  317552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 09:17:14.521537  317552 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:17:14.660334  317552 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-444637
	
	I1018 09:17:14.660392  317552 ubuntu.go:182] provisioning hostname "newest-cni-444637"
	I1018 09:17:14.660496  317552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:14.681745  317552 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:14.682027  317552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 09:17:14.682043  317552 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-444637 && echo "newest-cni-444637" | sudo tee /etc/hostname
	I1018 09:17:14.829812  317552 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-444637
	
	I1018 09:17:14.829885  317552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:14.849631  317552 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:14.849918  317552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 09:17:14.849965  317552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-444637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-444637/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-444637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:17:14.988236  317552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:17:14.988268  317552 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:17:14.988294  317552 ubuntu.go:190] setting up certificates
	I1018 09:17:14.988307  317552 provision.go:84] configureAuth start
	I1018 09:17:14.988404  317552 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:15.008288  317552 provision.go:143] copyHostCerts
	I1018 09:17:15.008375  317552 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:17:15.008390  317552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:17:15.008475  317552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:17:15.008598  317552 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:17:15.008612  317552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:17:15.008653  317552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:17:15.008748  317552 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:17:15.008758  317552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:17:15.008795  317552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:17:15.008866  317552 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-444637 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-444637]
	I1018 09:17:15.174291  317552 provision.go:177] copyRemoteCerts
	I1018 09:17:15.174369  317552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:17:15.174408  317552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:15.193022  317552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:15.291082  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:17:15.312380  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:17:15.331424  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:17:15.350924  317552 provision.go:87] duration metric: took 362.601029ms to configureAuth
	I1018 09:17:15.350948  317552 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:17:15.351135  317552 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:15.351271  317552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:15.371047  317552 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:15.371245  317552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 09:17:15.371260  317552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:17:15.634019  317552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:17:15.634040  317552 machine.go:96] duration metric: took 1.13224377s to provisionDockerMachine
	I1018 09:17:15.634050  317552 client.go:171] duration metric: took 6.445094232s to LocalClient.Create
	I1018 09:17:15.634066  317552 start.go:167] duration metric: took 6.445158625s to libmachine.API.Create "newest-cni-444637"
	I1018 09:17:15.634073  317552 start.go:293] postStartSetup for "newest-cni-444637" (driver="docker")
	I1018 09:17:15.634084  317552 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:17:15.634154  317552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:17:15.634198  317552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:15.654294  317552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:15.754992  317552 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:17:15.759105  317552 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:17:15.759136  317552 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:17:15.759147  317552 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:17:15.759197  317552 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:17:15.759264  317552 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:17:15.759365  317552 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:17:15.767631  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:17:15.788461  317552 start.go:296] duration metric: took 154.373371ms for postStartSetup
	I1018 09:17:15.788843  317552 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:15.808952  317552 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json ...
	I1018 09:17:15.809233  317552 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:17:15.809284  317552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:15.829557  317552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:15.925534  317552 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:17:15.931146  317552 start.go:128] duration metric: took 6.745390865s to createHost
	I1018 09:17:15.931177  317552 start.go:83] releasing machines lock for "newest-cni-444637", held for 6.745549753s
	I1018 09:17:15.931265  317552 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:15.954646  317552 ssh_runner.go:195] Run: cat /version.json
	I1018 09:17:15.954660  317552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:17:15.954708  317552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:15.954739  317552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:15.975594  317552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:15.976729  317552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:16.129585  317552 ssh_runner.go:195] Run: systemctl --version
	I1018 09:17:16.136952  317552 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:17:16.174231  317552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:17:16.179303  317552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:17:16.179431  317552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:17:16.206829  317552 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
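The find invocation above is logged with its shell quoting stripped. An equivalent, properly quoted form that renames any bridge/podman CNI configs out of the way (same paths and suffix as in the log) would be:

    # Move bridge/podman CNI configs aside so the recommended kindnet CNI owns the pod network
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;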
	I1018 09:17:16.206856  317552 start.go:495] detecting cgroup driver to use...
	I1018 09:17:16.206891  317552 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:17:16.206942  317552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:17:16.227249  317552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:17:16.243123  317552 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:17:16.243168  317552 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:17:16.264732  317552 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:17:16.286505  317552 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:17:16.376744  317552 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:17:16.473683  317552 docker.go:234] disabling docker service ...
	I1018 09:17:16.473752  317552 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:17:16.494405  317552 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:17:16.507891  317552 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:17:16.606126  317552 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:17:16.712833  317552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:17:16.726314  317552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:17:16.742596  317552 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:17:16.742654  317552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:16.754056  317552 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:17:16.754114  317552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:16.763974  317552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:16.773872  317552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:16.783474  317552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:17:16.795229  317552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:16.808833  317552 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:16.823900  317552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:16.833523  317552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:17:16.841872  317552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:17:16.850852  317552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:17:16.959224  317552 ssh_runner.go:195] Run: sudo systemctl restart crio
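The sed pipeline above leaves four settings behind in the CRI-O drop-in before the restart: a pinned pause image, the systemd cgroup manager, conmon in the pod cgroup, and unprivileged low ports. A quick check of the result (file path as used in this log):

    # Confirm the settings the sed edits are supposed to leave in the drop-in
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf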
	I1018 09:17:17.071119  317552 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:17:17.071178  317552 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:17:17.075205  317552 start.go:563] Will wait 60s for crictl version
	I1018 09:17:17.075261  317552 ssh_runner.go:195] Run: which crictl
	I1018 09:17:17.079167  317552 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:17:17.104625  317552 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:17:17.104711  317552 ssh_runner.go:195] Run: crio --version
	I1018 09:17:17.141984  317552 ssh_runner.go:195] Run: crio --version
	I1018 09:17:17.174497  317552 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:17:17.175770  317552 cli_runner.go:164] Run: docker network inspect newest-cni-444637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
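The one-line Go template above packs the network's name, driver, subnet, gateway, MTU, and container IPs into pseudo-JSON for parsing. For interactive use, a smaller template extracting just the IPAM data is often enough (network name taken from this run):

    # Show only the subnet and gateway of the cluster's Docker network
    docker network inspect newest-cni-444637 \
      --format '{{range .IPAM.Config}}subnet={{.Subnet}} gateway={{.Gateway}}{{end}}'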
	I1018 09:17:17.194457  317552 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:17:17.198997  317552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:17:17.211964  317552 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:17:16.213339  318609 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:17:16.218229  318609 fix.go:56] duration metric: took 4.866428231s for fixHost
	I1018 09:17:16.218262  318609 start.go:83] releasing machines lock for "embed-certs-880603", held for 4.866495396s
	I1018 09:17:16.218335  318609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-880603
	I1018 09:17:16.238994  318609 ssh_runner.go:195] Run: cat /version.json
	I1018 09:17:16.239037  318609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:17:16.239082  318609 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:17:16.239169  318609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:17:16.260585  318609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:17:16.260645  318609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:17:16.354778  318609 ssh_runner.go:195] Run: systemctl --version
	I1018 09:17:16.419640  318609 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:17:16.459295  318609 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:17:16.464305  318609 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:17:16.464384  318609 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:17:16.473431  318609 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:17:16.473460  318609 start.go:495] detecting cgroup driver to use...
	I1018 09:17:16.473491  318609 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:17:16.473540  318609 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:17:16.490131  318609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:17:16.504274  318609 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:17:16.504360  318609 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:17:16.520533  318609 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:17:16.534510  318609 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:17:16.631395  318609 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:17:16.735201  318609 docker.go:234] disabling docker service ...
	I1018 09:17:16.735271  318609 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:17:16.751373  318609 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:17:16.765218  318609 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:17:16.853436  318609 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:17:16.959246  318609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:17:16.973578  318609 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:17:16.989668  318609 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:17:16.989731  318609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:16.999649  318609 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:17:16.999706  318609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:17.010154  318609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:17.020863  318609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:17.030944  318609 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:17:17.039860  318609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:17.050630  318609 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:17.060600  318609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:17.070777  318609 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:17:17.078973  318609 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:17:17.087447  318609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:17:17.174028  318609 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:17:17.298973  318609 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:17:17.299033  318609 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:17:17.303782  318609 start.go:563] Will wait 60s for crictl version
	I1018 09:17:17.303848  318609 ssh_runner.go:195] Run: which crictl
	I1018 09:17:17.308605  318609 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:17:17.335260  318609 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
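Because /etc/crictl.yaml was written a few steps earlier to point runtime-endpoint at the CRI-O socket, the crictl call above needs no flags. The explicit equivalent, shown here as a hedged example, is:

    # Same query, with the endpoint spelled out instead of read from /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version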
	I1018 09:17:17.335370  318609 ssh_runner.go:195] Run: crio --version
	I1018 09:17:17.367083  318609 ssh_runner.go:195] Run: crio --version
	I1018 09:17:17.398816  318609 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:17:17.400126  318609 cli_runner.go:164] Run: docker network inspect embed-certs-880603 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:17:17.418651  318609 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:17:17.423444  318609 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:17:17.435876  318609 kubeadm.go:883] updating cluster {Name:embed-certs-880603 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:17:17.436015  318609 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:17:17.436084  318609 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:17:17.480327  318609 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:17:17.480374  318609 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:17:17.480431  318609 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:17:17.509539  318609 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:17:17.509563  318609 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:17:17.509572  318609 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:17:17.509683  318609 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-880603 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:17:17.509770  318609 ssh_runner.go:195] Run: crio config
	I1018 09:17:17.568907  318609 cni.go:84] Creating CNI manager for ""
	I1018 09:17:17.568934  318609 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:17:17.568950  318609 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:17:17.568979  318609 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880603 NodeName:embed-certs-880603 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:17:17.569156  318609 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880603"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:17:17.569233  318609 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:17:17.580848  318609 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:17:17.580916  318609 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:17:17.592129  318609 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:17:17.610873  318609 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:17:17.626475  318609 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
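The kubeadm config printed above is the 2214-byte file just copied to /var/tmp/minikube/kubeadm.yaml.new. As a hedged aside, recent kubeadm releases (v1.26+) can statically sanity-check such a file before it is ever applied:

    # Static validation only; does not touch the cluster
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new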
	I1018 09:17:17.641313  318609 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:17:17.646150  318609 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
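The /etc/hosts rewrite above filters out any stale control-plane.minikube.internal entry, appends the fresh mapping, and installs the result via a temp file so the live file is replaced by a single cp. The same pattern, unrolled with comments (IP and hostname from this run):

    # Drop any existing mapping, append the current one, then swap the file in one step
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo $'192.168.76.2\tcontrol-plane.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts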
	I1018 09:17:17.659093  318609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:17:17.749922  318609 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:17:17.787276  318609 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603 for IP: 192.168.76.2
	I1018 09:17:17.787298  318609 certs.go:195] generating shared ca certs ...
	I1018 09:17:17.787323  318609 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:17.787513  318609 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:17:17.787588  318609 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:17:17.787607  318609 certs.go:257] generating profile certs ...
	I1018 09:17:17.787710  318609 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/client.key
	I1018 09:17:17.787792  318609 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.key.d64b1fe7
	I1018 09:17:17.787846  318609 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.key
	I1018 09:17:17.787977  318609 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:17:17.788018  318609 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:17:17.788033  318609 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:17:17.788072  318609 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:17:17.788104  318609 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:17:17.788136  318609 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:17:17.788191  318609 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:17:17.788892  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:17:17.809449  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:17:17.829942  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:17:17.851741  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:17:17.878882  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 09:17:17.899906  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:17:17.918672  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:17:17.936999  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/embed-certs-880603/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:17:17.955320  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:17:17.973933  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:17:17.993795  318609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:17:18.012718  318609 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:17:18.025852  318609 ssh_runner.go:195] Run: openssl version
	I1018 09:17:18.032228  318609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:17:18.041496  318609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:17:18.045449  318609 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:17:18.045515  318609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:17:18.081598  318609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:17:18.090474  318609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:17:18.099867  318609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:17:18.103919  318609 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:17:18.103984  318609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:17:18.138853  318609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:17:18.148176  318609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:17:18.157408  318609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:17:18.161569  318609 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:17:18.161622  318609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:17:18.196252  318609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
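The ln -fs commands above implement OpenSSL's subject-hash lookup scheme: each CA under /etc/ssl/certs must be reachable through a link named <subject-hash>.0 (b5213941.0 for minikubeCA in this run). Computing the link name by hand, as a sketch using the same cert path:

    # Derive the subject-hash link name OpenSSL expects, then create the link
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"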
	I1018 09:17:18.205491  318609 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:17:18.210221  318609 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:17:18.246002  318609 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:17:18.282394  318609 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:17:18.329169  318609 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:17:18.384244  318609 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:17:18.436808  318609 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
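Each openssl run above passes -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is the signal that the cert is about to lapse. Standalone form, using one of the paths from this log (reading it may require root):

    # Exit status 0 = cert good for at least another 24h; 1 = expiring or expired
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "certificate valid for >24h"
    else
      echo "certificate expires within 24h"
    fi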
	I1018 09:17:18.485585  318609 kubeadm.go:400] StartCluster: {Name:embed-certs-880603 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-880603 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:18.485661  318609 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:17:18.485721  318609 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:17:18.518857  318609 cri.go:89] found id: "bb56d20e298366d034e8ab343121b80816150a01f06f5e3dcf23656917831fa8"
	I1018 09:17:18.518882  318609 cri.go:89] found id: "299c2f3530014f0f00b7697904d0bbe7e76037825ca7356cddf1e7e8e4cfbd3f"
	I1018 09:17:18.518888  318609 cri.go:89] found id: "0e0ff398f2a3fe03603d026ff3c4c4aa2a99cc70201bf2049eaef07838ab4ad9"
	I1018 09:17:18.518893  318609 cri.go:89] found id: "ec50948aa740993206f6c6b998952e98a6c0c34a9993daeba8381c7072181c67"
	I1018 09:17:18.518905  318609 cri.go:89] found id: ""
	I1018 09:17:18.518953  318609 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:17:18.531967  318609 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:17:18Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:17:18.532037  318609 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:17:18.540997  318609 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:17:18.541023  318609 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:17:18.541066  318609 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:17:18.549504  318609 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:17:18.550144  318609 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-880603" does not appear in /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:18.550593  318609 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-5897/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-880603" cluster setting kubeconfig missing "embed-certs-880603" context setting]
	I1018 09:17:18.551186  318609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:18.552926  318609 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:17:18.561741  318609 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 09:17:18.561778  318609 kubeadm.go:601] duration metric: took 20.749106ms to restartPrimaryControlPlane
	I1018 09:17:18.561788  318609 kubeadm.go:402] duration metric: took 76.208434ms to StartCluster
	I1018 09:17:18.561807  318609 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:18.561870  318609 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:18.563795  318609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:18.564071  318609 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:17:18.564358  318609 config.go:182] Loaded profile config "embed-certs-880603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:18.564251  318609 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:17:18.564475  318609 addons.go:69] Setting dashboard=true in profile "embed-certs-880603"
	I1018 09:17:18.564526  318609 addons.go:238] Setting addon dashboard=true in "embed-certs-880603"
	W1018 09:17:18.564537  318609 addons.go:247] addon dashboard should already be in state true
	I1018 09:17:18.564551  318609 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880603"
	I1018 09:17:18.564565  318609 host.go:66] Checking if "embed-certs-880603" exists ...
	I1018 09:17:18.564566  318609 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880603"
	I1018 09:17:18.564893  318609 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:17:18.565053  318609 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:17:18.565061  318609 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880603"
	I1018 09:17:18.565078  318609 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-880603"
	W1018 09:17:18.565089  318609 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:17:18.565114  318609 host.go:66] Checking if "embed-certs-880603" exists ...
	I1018 09:17:18.565552  318609 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:17:18.570753  318609 out.go:179] * Verifying Kubernetes components...
	I1018 09:17:18.572233  318609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:17:18.595478  318609 addons.go:238] Setting addon default-storageclass=true in "embed-certs-880603"
	W1018 09:17:18.595512  318609 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:17:18.595544  318609 host.go:66] Checking if "embed-certs-880603" exists ...
	I1018 09:17:18.596010  318609 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:17:18.596788  318609 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:17:18.596843  318609 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:17:18.598011  318609 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:17:18.598029  318609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:17:18.598041  318609 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:17:17.213422  317552 kubeadm.go:883] updating cluster {Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:17:17.213533  317552 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:17:17.213594  317552 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:17:17.251922  317552 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:17:17.251950  317552 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:17:17.252007  317552 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:17:17.280998  317552 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:17:17.281022  317552 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:17:17.281030  317552 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:17:17.281117  317552 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-444637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
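
The unit dump above uses the standard systemd drop-in override idiom: the first, empty ExecStart= clears the command inherited from /lib/systemd/system/kubelet.service, and the second ExecStart= installs the replacement, so the drop-in fully owns the start command. A minimal sketch of such a drop-in (flag list abbreviated here; the full set is in the log above):

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
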
	I1018 09:17:17.281181  317552 ssh_runner.go:195] Run: crio config
	I1018 09:17:17.332549  317552 cni.go:84] Creating CNI manager for ""
	I1018 09:17:17.332577  317552 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:17:17.332599  317552 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:17:17.332636  317552 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-444637 NodeName:newest-cni-444637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:17:17.332793  317552 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-444637"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
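Before a file like the one above is handed to kubeadm init, its documents can be checked offline. A sketch, assuming the kubeadm v1.34 binary staged by minikube (the validate subcommand checks every document in the file without modifying the node):

	# validate the InitConfiguration/ClusterConfiguration and the component
	# configs generated above
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml
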
	I1018 09:17:17.332868  317552 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:17:17.341998  317552 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:17:17.342065  317552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:17:17.350911  317552 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:17:17.365559  317552 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:17:17.383555  317552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:17:17.398412  317552 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:17:17.402524  317552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
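
The one-liner above is an idempotent /etc/hosts edit: it strips any existing control-plane.minikube.internal entry, appends the current mapping, and stages the result in a temp file so the privileged write is a single sudo cp. The same edit, unrolled for readability (same commands and paths as the log):

	# rebuild /etc/hosts with exactly one control-plane mapping
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
	printf '192.168.103.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
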
	I1018 09:17:17.413781  317552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:17:17.518372  317552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:17:17.541710  317552 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637 for IP: 192.168.103.2
	I1018 09:17:17.541739  317552 certs.go:195] generating shared ca certs ...
	I1018 09:17:17.541761  317552 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:17.541913  317552 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:17:17.541972  317552 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:17:17.541983  317552 certs.go:257] generating profile certs ...
	I1018 09:17:17.542051  317552 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/client.key
	I1018 09:17:17.542072  317552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/client.crt with IP's: []
	I1018 09:17:17.900255  317552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/client.crt ...
	I1018 09:17:17.900288  317552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/client.crt: {Name:mk398aef82068c0d4eae9a6820268134c19ea579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:17.900638  317552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/client.key ...
	I1018 09:17:17.900665  317552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/client.key: {Name:mk82a83e15c2a917159fcd38122480d30555b2da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:17.900816  317552 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key.d9d366ba
	I1018 09:17:17.900834  317552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.crt.d9d366ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1018 09:17:19.193810  317552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.crt.d9d366ba ...
	I1018 09:17:19.193860  317552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.crt.d9d366ba: {Name:mk430c643f46b1da6aefd8860203413863958272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:19.194087  317552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key.d9d366ba ...
	I1018 09:17:19.194104  317552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key.d9d366ba: {Name:mkaa621bf58b27e56caba0ac7c09d1a67e27edb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:19.194197  317552 certs.go:382] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.crt.d9d366ba -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.crt
	I1018 09:17:19.194291  317552 certs.go:386] copying /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key.d9d366ba -> /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key
	I1018 09:17:19.194386  317552 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key
	I1018 09:17:19.194418  317552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.crt with IP's: []
	I1018 09:17:19.640155  317552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.crt ...
	I1018 09:17:19.640190  317552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.crt: {Name:mk37c4949bb4f29e1aca8ce57ba765fc27e96b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:19.640396  317552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key ...
	I1018 09:17:19.640414  317552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key: {Name:mkcf69ff91965a3ff895a29939017f5a16268130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:19.640645  317552 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:17:19.640694  317552 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:17:19.640709  317552 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:17:19.640738  317552 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:17:19.640801  317552 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:17:19.640831  317552 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:17:19.640884  317552 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:17:19.641603  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:17:19.664788  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:17:19.687273  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:17:19.709412  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:17:19.733291  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:17:19.753506  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:17:19.774582  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:17:19.794195  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:17:19.819204  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:17:19.842235  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:17:19.864300  317552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:17:19.884089  317552 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:17:19.897889  317552 ssh_runner.go:195] Run: openssl version
	I1018 09:17:19.905230  317552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:17:19.915057  317552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:17:19.919378  317552 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:17:19.919469  317552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:17:19.962112  317552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:17:19.971611  317552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:17:19.980835  317552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:17:19.984939  317552 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:17:19.984998  317552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:17:20.025091  317552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:17:20.036179  317552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:17:20.045384  317552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:17:20.049425  317552 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:17:20.049531  317552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:17:20.098711  317552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
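
The test/ln pairs above wire each PEM into OpenSSL's hashed-directory trust layout: a CA under /etc/ssl/certs is looked up via a symlink named <subject-hash>.0, and the hash is exactly what openssl x509 -hash prints (b5213941 for minikubeCA.pem, matching the link created above). A sketch of the same wiring for one certificate:

	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")    # prints e.g. b5213941
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"   # .0 suffix disambiguates hash collisions
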
	I1018 09:17:20.112962  317552 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:17:20.118975  317552 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:17:20.119046  317552 kubeadm.go:400] StartCluster: {Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:20.119145  317552 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:17:20.119201  317552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:17:20.152793  317552 cri.go:89] found id: ""
	I1018 09:17:20.152871  317552 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:17:20.161739  317552 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:17:20.173773  317552 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:17:20.173869  317552 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:17:20.194045  317552 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:17:20.194132  317552 kubeadm.go:157] found existing configuration files:
	
	I1018 09:17:20.194209  317552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:17:20.209951  317552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:17:20.210019  317552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:17:20.225578  317552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:17:20.237533  317552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:17:20.237617  317552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:17:20.252853  317552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:17:20.263573  317552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:17:20.263640  317552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:17:20.274725  317552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:17:20.285466  317552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:17:20.285540  317552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
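
Each grep/rm pair above applies the same guard to one kubeconfig: keep the file only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm regenerates it. The four checks collapse into one loop; a sketch over the same file set:

	for f in admin kubelet controller-manager scheduler; do
	  c="/etc/kubernetes/$f.conf"
	  # missing files and files with a stale endpoint are both removed
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "$c" || sudo rm -f "$c"
	done
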
	I1018 09:17:20.294669  317552 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:17:20.344138  317552 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:17:20.344278  317552 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:17:20.370237  317552 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:17:20.370322  317552 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:17:20.370395  317552 kubeadm.go:318] OS: Linux
	I1018 09:17:20.370461  317552 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:17:20.370548  317552 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:17:20.370643  317552 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:17:20.370738  317552 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:17:20.370821  317552 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:17:20.370882  317552 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:17:20.370956  317552 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:17:20.371013  317552 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:17:20.456405  317552 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:17:20.456571  317552 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:17:20.456717  317552 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:17:20.466719  317552 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:17:18.598085  318609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:17:18.602272  318609 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:17:18.602297  318609 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:17:18.602391  318609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:17:18.625501  318609 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:17:18.625530  318609 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:17:18.625594  318609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:17:18.634214  318609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:17:18.635708  318609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:17:18.665410  318609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:17:18.759504  318609 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:17:18.761338  318609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:17:18.763410  318609 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:17:18.763497  318609 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:17:18.782337  318609 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880603" to be "Ready" ...
	I1018 09:17:18.784570  318609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:17:18.784675  318609 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:17:18.784688  318609 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:17:18.809225  318609 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:17:18.809253  318609 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:17:18.831839  318609 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:17:18.831866  318609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:17:18.850982  318609 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:17:18.851009  318609 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:17:18.867513  318609 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:17:18.867541  318609 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:17:18.882118  318609 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:17:18.882188  318609 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:17:18.900023  318609 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:17:18.900047  318609 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:17:18.914260  318609 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:17:18.914286  318609 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:17:18.929639  318609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:17:20.209382  318609 node_ready.go:49] node "embed-certs-880603" is "Ready"
	I1018 09:17:20.209445  318609 node_ready.go:38] duration metric: took 1.426935211s for node "embed-certs-880603" to be "Ready" ...
	I1018 09:17:20.209464  318609 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:17:20.209543  318609 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:17:20.783351  318609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.021954948s)
	I1018 09:17:20.783413  318609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.99880947s)
	I1018 09:17:20.783566  318609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.853881663s)
	I1018 09:17:20.783588  318609 api_server.go:72] duration metric: took 2.219492217s to wait for apiserver process to appear ...
	I1018 09:17:20.783601  318609 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:17:20.783623  318609 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:17:20.785443  318609 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-880603 addons enable metrics-server
	
	I1018 09:17:20.791245  318609 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:17:20.791287  318609 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
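
The 500 bodies above are the apiserver's verbose healthz report: every [+] check has passed and only the two [-] bootstrap post-start hooks are still pending, which clears on its own a moment later. The same breakdown can be fetched directly; a sketch against the endpoint from the log (/healthz is readable by unauthenticated clients under default RBAC, and -k skips verification of the self-signed serving cert):

	curl -k 'https://192.168.76.2:8443/healthz?verbose'
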
	I1018 09:17:20.797568  318609 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 09:17:20.798698  318609 addons.go:514] duration metric: took 2.234454777s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:17:20.469473  317552 out.go:252]   - Generating certificates and keys ...
	I1018 09:17:20.469581  317552 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:17:20.469700  317552 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:17:21.798331  317552 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:17:21.900447  317552 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:17:22.359256  317552 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:17:22.555795  317552 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:17:22.680524  317552 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:17:22.680683  317552 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-444637] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1018 09:17:22.956534  317552 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:17:22.956711  317552 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-444637] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1018 09:17:23.219297  317552 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:17:23.396104  317552 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:17:23.578682  317552 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:17:23.578772  317552 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:17:23.805634  317552 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:17:23.893295  317552 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:17:23.991132  317552 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:17:24.115587  317552 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:17:24.348605  317552 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:17:24.349467  317552 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:17:24.354280  317552 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:17:21.284495  318609 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:17:21.290487  318609 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:17:21.290528  318609 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:17:21.784127  318609 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:17:21.789515  318609 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:17:21.790552  318609 api_server.go:141] control plane version: v1.34.1
	I1018 09:17:21.790581  318609 api_server.go:131] duration metric: took 1.006970761s to wait for apiserver health ...
	I1018 09:17:21.790590  318609 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:17:21.794729  318609 system_pods.go:59] 8 kube-system pods found
	I1018 09:17:21.794777  318609 system_pods.go:61] "coredns-66bc5c9577-7fnw7" [04bb2d33-29f9-45e9-a6b1-e2b770651c0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:17:21.794792  318609 system_pods.go:61] "etcd-embed-certs-880603" [da7643b6-9066-4e2f-99eb-c2e6d085f539] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:17:21.794803  318609 system_pods.go:61] "kindnet-wzdm5" [20629c75-ca93-46db-875e-49d67c7b3f06] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 09:17:21.794816  318609 system_pods.go:61] "kube-apiserver-embed-certs-880603" [1e4ec0ef-dc43-4939-a733-02690b04d19b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:17:21.794829  318609 system_pods.go:61] "kube-controller-manager-embed-certs-880603" [ccc2c9f0-0b2c-46e5-bea8-c82e8e3124ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:17:21.794838  318609 system_pods.go:61] "kube-proxy-k4kcs" [83d1821f-468a-4bf0-8fc0-e40e0668f6ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:17:21.794849  318609 system_pods.go:61] "kube-scheduler-embed-certs-880603" [5635b7e1-dca9-4a7e-8b9c-aa96067fd707] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:17:21.794859  318609 system_pods.go:61] "storage-provisioner" [d2aa7a09-3332-4744-9180-d307b4fc8194] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:17:21.794866  318609 system_pods.go:74] duration metric: took 4.269529ms to wait for pod list to return data ...
	I1018 09:17:21.794881  318609 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:17:21.797573  318609 default_sa.go:45] found service account: "default"
	I1018 09:17:21.797593  318609 default_sa.go:55] duration metric: took 2.706862ms for default service account to be created ...
	I1018 09:17:21.797602  318609 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:17:21.800602  318609 system_pods.go:86] 8 kube-system pods found
	I1018 09:17:21.800638  318609 system_pods.go:89] "coredns-66bc5c9577-7fnw7" [04bb2d33-29f9-45e9-a6b1-e2b770651c0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:17:21.800650  318609 system_pods.go:89] "etcd-embed-certs-880603" [da7643b6-9066-4e2f-99eb-c2e6d085f539] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:17:21.800661  318609 system_pods.go:89] "kindnet-wzdm5" [20629c75-ca93-46db-875e-49d67c7b3f06] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 09:17:21.800675  318609 system_pods.go:89] "kube-apiserver-embed-certs-880603" [1e4ec0ef-dc43-4939-a733-02690b04d19b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:17:21.800684  318609 system_pods.go:89] "kube-controller-manager-embed-certs-880603" [ccc2c9f0-0b2c-46e5-bea8-c82e8e3124ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:17:21.800692  318609 system_pods.go:89] "kube-proxy-k4kcs" [83d1821f-468a-4bf0-8fc0-e40e0668f6ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:17:21.800699  318609 system_pods.go:89] "kube-scheduler-embed-certs-880603" [5635b7e1-dca9-4a7e-8b9c-aa96067fd707] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:17:21.800710  318609 system_pods.go:89] "storage-provisioner" [d2aa7a09-3332-4744-9180-d307b4fc8194] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:17:21.800724  318609 system_pods.go:126] duration metric: took 3.115091ms to wait for k8s-apps to be running ...
	I1018 09:17:21.800737  318609 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:17:21.800790  318609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:17:21.814423  318609 system_svc.go:56] duration metric: took 13.675501ms WaitForService to wait for kubelet
	I1018 09:17:21.814454  318609 kubeadm.go:586] duration metric: took 3.250358327s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:17:21.814478  318609 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:17:21.817948  318609 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:17:21.817978  318609 node_conditions.go:123] node cpu capacity is 8
	I1018 09:17:21.817990  318609 node_conditions.go:105] duration metric: took 3.506529ms to run NodePressure ...
	I1018 09:17:21.818000  318609 start.go:241] waiting for startup goroutines ...
	I1018 09:17:21.818007  318609 start.go:246] waiting for cluster config update ...
	I1018 09:17:21.818017  318609 start.go:255] writing updated cluster config ...
	I1018 09:17:21.818287  318609 ssh_runner.go:195] Run: rm -f paused
	I1018 09:17:21.822813  318609 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:17:21.827377  318609 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7fnw7" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:17:23.833971  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:25.835798  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
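
The extra wait above polls each labelled control-plane pod for up to 4m0s. An equivalent check can be reproduced with kubectl once the kubeconfig has been repaired; a sketch for one of the listed labels (context name taken from the kubeconfig update earlier in this run):

	kubectl --context embed-certs-880603 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
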
	
	
	==> CRI-O <==
	Oct 18 09:16:41 no-preload-031066 crio[563]: time="2025-10-18T09:16:41.997891046Z" level=info msg="Started container" PID=1720 containerID=f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7/dashboard-metrics-scraper id=5a7c8308-d9a7-4faf-9084-e7103d4f7cde name=/runtime.v1.RuntimeService/StartContainer sandboxID=dac843f615cc4fb718665d00a9c20d6d3ec6271e0ca3b70890dab0552b61d73b
	Oct 18 09:16:42 no-preload-031066 crio[563]: time="2025-10-18T09:16:42.954985418Z" level=info msg="Removing container: b9c64c3a59bf38a4fa0fba6b4129a5dcfe61d4f9bf702b16c7b72d5cb86232f6" id=8578d585-d454-4e30-b8c4-825fb5331863 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:16:42 no-preload-031066 crio[563]: time="2025-10-18T09:16:42.964953967Z" level=info msg="Removed container b9c64c3a59bf38a4fa0fba6b4129a5dcfe61d4f9bf702b16c7b72d5cb86232f6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7/dashboard-metrics-scraper" id=8578d585-d454-4e30-b8c4-825fb5331863 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.001297131Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=938f76ca-dab3-4d89-810b-98e70186d7d7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.002390692Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4959cebe-ea53-4a28-b596-f105bf746c51 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.003538344Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ec44ed24-9d44-4197-a729-5c3ce35bd7b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.003842922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.008219156Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.00843719Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ca6907044292ecd6377c46aa1583f9bd92bd14fa993991d2bdea3a406f1aab80/merged/etc/passwd: no such file or directory"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.008466256Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ca6907044292ecd6377c46aa1583f9bd92bd14fa993991d2bdea3a406f1aab80/merged/etc/group: no such file or directory"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.008788671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.034781361Z" level=info msg="Created container 75ce77572e8bf3a989ac18d086a9f8cfbaae21b8f7296e1a9244cd17d037a2e0: kube-system/storage-provisioner/storage-provisioner" id=ec44ed24-9d44-4197-a729-5c3ce35bd7b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.035485639Z" level=info msg="Starting container: 75ce77572e8bf3a989ac18d086a9f8cfbaae21b8f7296e1a9244cd17d037a2e0" id=fd49361d-b9ac-4094-870c-a434ec299d34 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.037807584Z" level=info msg="Started container" PID=1736 containerID=75ce77572e8bf3a989ac18d086a9f8cfbaae21b8f7296e1a9244cd17d037a2e0 description=kube-system/storage-provisioner/storage-provisioner id=fd49361d-b9ac-4094-870c-a434ec299d34 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b04896f5442c20ef3db6cda9b6c96661b5a753bf6d618df05e6dfdffad43e4d2
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.847273088Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0dd2ef85-2575-4c16-b2ac-e870f5339871 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.848259707Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=012dd2ea-7ce8-44f5-aef1-0e2d27c2fc7a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.849439565Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7/dashboard-metrics-scraper" id=c80863aa-8008-49b1-b0a2-45b906bbd060 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.849757045Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.856162829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.856720707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.89058289Z" level=info msg="Created container b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7/dashboard-metrics-scraper" id=c80863aa-8008-49b1-b0a2-45b906bbd060 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.89125079Z" level=info msg="Starting container: b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0" id=10181bcf-4080-48aa-92b3-048153822875 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.89344226Z" level=info msg="Started container" PID=1750 containerID=b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7/dashboard-metrics-scraper id=10181bcf-4080-48aa-92b3-048153822875 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dac843f615cc4fb718665d00a9c20d6d3ec6271e0ca3b70890dab0552b61d73b
	Oct 18 09:17:03 no-preload-031066 crio[563]: time="2025-10-18T09:17:03.00702446Z" level=info msg="Removing container: f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33" id=0bb066b4-d6d4-4d4a-beec-700116ba5df8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:17:03 no-preload-031066 crio[563]: time="2025-10-18T09:17:03.017306887Z" level=info msg="Removed container f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7/dashboard-metrics-scraper" id=0bb066b4-d6d4-4d4a-beec-700116ba5df8 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b1fe4f4a7ee10       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago       Exited              dashboard-metrics-scraper   2                   dac843f615cc4       dashboard-metrics-scraper-6ffb444bf9-fg4h7   kubernetes-dashboard
	75ce77572e8bf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   b04896f5442c2       storage-provisioner                          kube-system
	e718435c8ea3e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago       Running             kubernetes-dashboard        0                   e7b0c437c2f60       kubernetes-dashboard-855c9754f9-z9ksf        kubernetes-dashboard
	52bff8d4511f9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   ade36f0d610aa       busybox                                      default
	2dc50ea4d70ff       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago       Running             coredns                     0                   29f560b5299bd       coredns-66bc5c9577-h44wj                     kube-system
	8935165d381c7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           57 seconds ago       Running             kube-proxy                  0                   015dca79e7a4e       kube-proxy-jr5qn                             kube-system
	c605782e4c42e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   b04896f5442c2       storage-provisioner                          kube-system
	703ddbb4126b1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   88f61b2c80faf       kindnet-k7m9t                                kube-system
	153dd41ff60f4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   ffdaa1abc2b2f       kube-controller-manager-no-preload-031066    kube-system
	b51a1224ef6b8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   456d429fde4b1       etcd-no-preload-031066                       kube-system
	62682de07bbfe       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   98a015178bec6       kube-apiserver-no-preload-031066             kube-system
	db536597b2746       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   ca618beec387d       kube-scheduler-no-preload-031066             kube-system
	
	
	==> coredns [2dc50ea4d70ff173a36533c407b63be08c1ba2f027b2e06301f77dc0a6e2fb65] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34692 - 46868 "HINFO IN 7617922424595391023.677625672438370840. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.485934365s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-031066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-031066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=no-preload-031066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_15_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:15:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-031066
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:17:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:17:21 +0000   Sat, 18 Oct 2025 09:15:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:17:21 +0000   Sat, 18 Oct 2025 09:15:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:17:21 +0000   Sat, 18 Oct 2025 09:15:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:17:21 +0000   Sat, 18 Oct 2025 09:15:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-031066
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                01d62f53-a2fc-4f1d-88c2-abcb9799608b
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-h44wj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-no-preload-031066                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-k7m9t                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-031066              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-031066     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-jr5qn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-031066              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fg4h7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-z9ksf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node no-preload-031066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node no-preload-031066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node no-preload-031066 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node no-preload-031066 event: Registered Node no-preload-031066 in Controller
	  Normal  NodeReady                98s                kubelet          Node no-preload-031066 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node no-preload-031066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node no-preload-031066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node no-preload-031066 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node no-preload-031066 event: Registered Node no-preload-031066 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[ +43.809014] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 6f 4b 2b 7f 46 08 06
	[  +0.000367] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	
	
	==> etcd [b51a1224ef6b876bc35ce20f2366f94525e300e4432dff8348abbde915ade5af] <==
	{"level":"warn","ts":"2025-10-18T09:16:29.791148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.799242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.807660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.817100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.831947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.842016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.850838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.860303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.869529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.879662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.888839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.899557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.908881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.916788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.924602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.933229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.941990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.950850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.959311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.967690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.983045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.992452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:30.001468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:30.075074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33554","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:17:10.721814Z","caller":"traceutil/trace.go:172","msg":"trace[1423453140] transaction","detail":"{read_only:false; response_revision:627; number_of_response:1; }","duration":"154.667761ms","start":"2025-10-18T09:17:10.567124Z","end":"2025-10-18T09:17:10.721791Z","steps":["trace[1423453140] 'process raft request'  (duration: 153.789233ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:17:28 up 59 min,  0 user,  load average: 5.27, 3.82, 2.52
	Linux no-preload-031066 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [703ddbb4126b1f1be32f6c0f727cee37f39cb83a922d2a063922f5b314414d37] <==
	I1018 09:16:31.539068       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:16:31.539356       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 09:16:31.539578       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:16:31.539600       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:16:31.539627       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:16:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:16:31.837720       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:16:31.839640       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:16:31.839673       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:16:31.839839       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:16:32.251296       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:16:32.251331       1 metrics.go:72] Registering metrics
	I1018 09:16:32.251447       1 controller.go:711] "Syncing nftables rules"
	I1018 09:16:41.754506       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:16:41.754598       1 main.go:301] handling current node
	I1018 09:16:51.757528       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:16:51.757589       1 main.go:301] handling current node
	I1018 09:17:01.754603       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:17:01.754642       1 main.go:301] handling current node
	I1018 09:17:11.754136       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:17:11.754177       1 main.go:301] handling current node
	I1018 09:17:21.754089       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:17:21.754127       1 main.go:301] handling current node
	
	
	==> kube-apiserver [62682de07bbfeb0d0f0c6405121566236410d571314651f369c15f65938b548a] <==
	I1018 09:16:30.746915       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:16:30.749291       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:16:30.749371       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:16:30.749446       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:16:30.752332       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:16:30.752367       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:16:30.753140       1 policy_source.go:240] refreshing policies
	I1018 09:16:30.753489       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 09:16:30.753550       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:16:30.753730       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 09:16:30.761751       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 09:16:30.777627       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:16:30.788208       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:16:30.810626       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:16:30.997582       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:16:31.248216       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:16:31.314520       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:16:31.348229       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:16:31.361875       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:16:31.436550       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.56.127"}
	I1018 09:16:31.454383       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.60.206"}
	I1018 09:16:31.643763       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:16:34.661050       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:16:34.710751       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:16:34.810319       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [153dd41ff60f495d247d4bd42054dd9255c2fe5ccbc173f31021152a50b30308] <==
	I1018 09:16:34.207203       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:16:34.207211       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:16:34.207219       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:16:34.207445       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:16:34.207703       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:16:34.208643       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:16:34.208656       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:16:34.208675       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:16:34.208836       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:16:34.209789       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:16:34.209814       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:16:34.209839       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:16:34.213425       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:16:34.215735       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:16:34.215750       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 09:16:34.216933       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:16:34.219125       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:16:34.222404       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:16:34.222526       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:16:34.222673       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-031066"
	I1018 09:16:34.222734       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 09:16:34.225782       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:16:34.228724       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:16:34.232083       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:16:34.236475       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8935165d381c76e0adbf1b4796ec6dacb8a681c43afb77e2bac74597041759ac] <==
	I1018 09:16:31.336014       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:16:31.407028       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:16:31.507489       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:16:31.508172       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 09:16:31.508451       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:16:31.536271       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:16:31.536370       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:16:31.543686       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:16:31.544303       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:16:31.544440       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:16:31.548173       1 config.go:200] "Starting service config controller"
	I1018 09:16:31.548206       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:16:31.548239       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:16:31.548244       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:16:31.548289       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:16:31.548299       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:16:31.548448       1 config.go:309] "Starting node config controller"
	I1018 09:16:31.548537       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:16:31.648833       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:16:31.648938       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:16:31.648957       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:16:31.648969       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [db536597b2746191742cfa1b8df28f2fe3935b9a553d5543f993db2773c9f6a1] <==
	I1018 09:16:28.900193       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:16:30.687495       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:16:30.687530       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:16:30.687544       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:16:30.687554       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:16:30.726825       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:16:30.726877       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:16:30.735804       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:16:30.735849       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:16:30.738454       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:16:30.738548       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:16:30.836301       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:16:34 no-preload-031066 kubelet[708]: I1018 09:16:34.917187     708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7bd3024a-a71b-4103-8169-ebb260c80af3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fg4h7\" (UID: \"7bd3024a-a71b-4103-8169-ebb260c80af3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7"
	Oct 18 09:16:34 no-preload-031066 kubelet[708]: I1018 09:16:34.917271     708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjxd5\" (UniqueName: \"kubernetes.io/projected/7bd3024a-a71b-4103-8169-ebb260c80af3-kube-api-access-tjxd5\") pod \"dashboard-metrics-scraper-6ffb444bf9-fg4h7\" (UID: \"7bd3024a-a71b-4103-8169-ebb260c80af3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7"
	Oct 18 09:16:41 no-preload-031066 kubelet[708]: I1018 09:16:41.099118     708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:16:41 no-preload-031066 kubelet[708]: I1018 09:16:41.949048     708 scope.go:117] "RemoveContainer" containerID="b9c64c3a59bf38a4fa0fba6b4129a5dcfe61d4f9bf702b16c7b72d5cb86232f6"
	Oct 18 09:16:41 no-preload-031066 kubelet[708]: I1018 09:16:41.965225     708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z9ksf" podStartSLOduration=3.780999331 podStartE2EDuration="7.965201277s" podCreationTimestamp="2025-10-18 09:16:34 +0000 UTC" firstStartedPulling="2025-10-18 09:16:35.103089003 +0000 UTC m=+7.358958911" lastFinishedPulling="2025-10-18 09:16:39.28729095 +0000 UTC m=+11.543160857" observedRunningTime="2025-10-18 09:16:39.95502353 +0000 UTC m=+12.210893458" watchObservedRunningTime="2025-10-18 09:16:41.965201277 +0000 UTC m=+14.221071290"
	Oct 18 09:16:42 no-preload-031066 kubelet[708]: I1018 09:16:42.953497     708 scope.go:117] "RemoveContainer" containerID="b9c64c3a59bf38a4fa0fba6b4129a5dcfe61d4f9bf702b16c7b72d5cb86232f6"
	Oct 18 09:16:42 no-preload-031066 kubelet[708]: I1018 09:16:42.953625     708 scope.go:117] "RemoveContainer" containerID="f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33"
	Oct 18 09:16:42 no-preload-031066 kubelet[708]: E1018 09:16:42.953842     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fg4h7_kubernetes-dashboard(7bd3024a-a71b-4103-8169-ebb260c80af3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7" podUID="7bd3024a-a71b-4103-8169-ebb260c80af3"
	Oct 18 09:16:43 no-preload-031066 kubelet[708]: I1018 09:16:43.957284     708 scope.go:117] "RemoveContainer" containerID="f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33"
	Oct 18 09:16:43 no-preload-031066 kubelet[708]: E1018 09:16:43.957512     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fg4h7_kubernetes-dashboard(7bd3024a-a71b-4103-8169-ebb260c80af3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7" podUID="7bd3024a-a71b-4103-8169-ebb260c80af3"
	Oct 18 09:16:50 no-preload-031066 kubelet[708]: I1018 09:16:50.559559     708 scope.go:117] "RemoveContainer" containerID="f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33"
	Oct 18 09:16:50 no-preload-031066 kubelet[708]: E1018 09:16:50.559818     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fg4h7_kubernetes-dashboard(7bd3024a-a71b-4103-8169-ebb260c80af3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7" podUID="7bd3024a-a71b-4103-8169-ebb260c80af3"
	Oct 18 09:17:02 no-preload-031066 kubelet[708]: I1018 09:17:02.000912     708 scope.go:117] "RemoveContainer" containerID="c605782e4c42eb91b018538170fe921def3b1396402bc03eee1dbce4f4af6a69"
	Oct 18 09:17:02 no-preload-031066 kubelet[708]: I1018 09:17:02.846721     708 scope.go:117] "RemoveContainer" containerID="f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33"
	Oct 18 09:17:03 no-preload-031066 kubelet[708]: I1018 09:17:03.005593     708 scope.go:117] "RemoveContainer" containerID="f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33"
	Oct 18 09:17:03 no-preload-031066 kubelet[708]: I1018 09:17:03.005831     708 scope.go:117] "RemoveContainer" containerID="b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0"
	Oct 18 09:17:03 no-preload-031066 kubelet[708]: E1018 09:17:03.006034     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fg4h7_kubernetes-dashboard(7bd3024a-a71b-4103-8169-ebb260c80af3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7" podUID="7bd3024a-a71b-4103-8169-ebb260c80af3"
	Oct 18 09:17:10 no-preload-031066 kubelet[708]: I1018 09:17:10.559792     708 scope.go:117] "RemoveContainer" containerID="b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0"
	Oct 18 09:17:10 no-preload-031066 kubelet[708]: E1018 09:17:10.560069     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fg4h7_kubernetes-dashboard(7bd3024a-a71b-4103-8169-ebb260c80af3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7" podUID="7bd3024a-a71b-4103-8169-ebb260c80af3"
	Oct 18 09:17:22 no-preload-031066 kubelet[708]: I1018 09:17:22.845893     708 scope.go:117] "RemoveContainer" containerID="b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0"
	Oct 18 09:17:22 no-preload-031066 kubelet[708]: E1018 09:17:22.846137     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fg4h7_kubernetes-dashboard(7bd3024a-a71b-4103-8169-ebb260c80af3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7" podUID="7bd3024a-a71b-4103-8169-ebb260c80af3"
	Oct 18 09:17:25 no-preload-031066 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:17:25 no-preload-031066 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:17:25 no-preload-031066 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:17:25 no-preload-031066 systemd[1]: kubelet.service: Consumed 1.866s CPU time.
	
	
	==> kubernetes-dashboard [e718435c8ea3ef9ed304f9cc405a3feced7a46aa8145c5c913dda9eee2bbfb61] <==
	2025/10/18 09:16:39 Starting overwatch
	2025/10/18 09:16:39 Using namespace: kubernetes-dashboard
	2025/10/18 09:16:39 Using in-cluster config to connect to apiserver
	2025/10/18 09:16:39 Using secret token for csrf signing
	2025/10/18 09:16:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:16:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:16:39 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:16:39 Generating JWE encryption key
	2025/10/18 09:16:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:16:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:16:39 Initializing JWE encryption key from synchronized object
	2025/10/18 09:16:39 Creating in-cluster Sidecar client
	2025/10/18 09:16:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:16:39 Serving insecurely on HTTP port: 9090
	2025/10/18 09:17:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [75ce77572e8bf3a989ac18d086a9f8cfbaae21b8f7296e1a9244cd17d037a2e0] <==
	I1018 09:17:02.052411       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:17:02.061267       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:17:02.061334       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:17:02.065498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:05.521707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:09.782580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:13.381866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:16.436646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:19.459680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:19.470851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:17:19.471035       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:17:19.471180       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d61b21d-ca88-4508-8d32-276d0fdbca79", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-031066_d0cdd423-e8c4-49e0-9a4c-c57eb38b4aab became leader
	I1018 09:17:19.471261       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-031066_d0cdd423-e8c4-49e0-9a4c-c57eb38b4aab!
	W1018 09:17:19.473864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:19.480452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:17:19.571907       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-031066_d0cdd423-e8c4-49e0-9a4c-c57eb38b4aab!
	W1018 09:17:21.484055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:21.489152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:23.492232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:23.496913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:25.501775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:25.517015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:27.531494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:27.542853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c605782e4c42eb91b018538170fe921def3b1396402bc03eee1dbce4f4af6a69] <==
	I1018 09:16:31.284759       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:17:01.289100       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-031066 -n no-preload-031066
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-031066 -n no-preload-031066: exit status 2 (491.745676ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-031066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-031066
helpers_test.go:243: (dbg) docker inspect no-preload-031066:

-- stdout --
	[
	    {
	        "Id": "dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0",
	        "Created": "2025-10-18T09:14:59.840380685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309639,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:16:21.509995917Z",
	            "FinishedAt": "2025-10-18T09:16:20.67977806Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0/hostname",
	        "HostsPath": "/var/lib/docker/containers/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0/hosts",
	        "LogPath": "/var/lib/docker/containers/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0/dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0-json.log",
	        "Name": "/no-preload-031066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-031066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-031066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dce899f902aef3d6f89585b10fccab0c498a8e85a102773c30f2d6dc5ea3fab0",
	                "LowerDir": "/var/lib/docker/overlay2/1e0e83685e550417cddd524d2d8b786a0c193a25b235b1df64d1bc4562ba00b1-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1e0e83685e550417cddd524d2d8b786a0c193a25b235b1df64d1bc4562ba00b1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1e0e83685e550417cddd524d2d8b786a0c193a25b235b1df64d1bc4562ba00b1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1e0e83685e550417cddd524d2d8b786a0c193a25b235b1df64d1bc4562ba00b1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-031066",
	                "Source": "/var/lib/docker/volumes/no-preload-031066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-031066",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-031066",
	                "name.minikube.sigs.k8s.io": "no-preload-031066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf6d3adbad74b83d7f67e9fbb4f0d081850f00a62b7124d9478bf4c4cb90b469",
	            "SandboxKey": "/var/run/docker/netns/cf6d3adbad74",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-031066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:da:e3:74:b5:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "659f168a65764f8b90baada540d0c1e70a7a90e0cd6e43139115c0a2c2f0c906",
	                    "EndpointID": "a928894f00caa7cff351765b3b30caf9e8449171543c306c7c567236d4be4067",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-031066",
	                        "dce899f902ae"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
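Note: the post-mortem dumps the full docker inspect JSON above, but the pause check really hinges on only a few of those fields (State.Status, State.Paused, and the published ports). As a minimal sketch for reproducing the same check by hand, using the standard docker CLI Go-template flag and the container name from this log:

	$ docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} restarts={{.RestartCount}}' no-preload-031066
	status=running paused=false restarts=0

The second line is the output implied by the JSON above (Status "running", Paused false, RestartCount 0).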
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-031066 -n no-preload-031066
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-031066 -n no-preload-031066: exit status 2 (404.859812ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
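The "(may be ok)" note reflects that minikube status reports component health through its exit code as well as stdout, while the {{.Host}} template above only surfaces the host state, which is still Running after a pause. A sketch of a wider query over the same Status struct (Host, Kubelet and APIServer are template fields; this log only exercises Host, so the wider form is illustrative):

	$ out/minikube-linux-amd64 status -p no-preload-031066 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'

The exact post-pause values for Kubelet and APIServer are not captured in this log.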
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-031066 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-031066 logs -n 25: (1.423878343s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-448954 sudo containerd config dump                                                                                                                                                                                      │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl status crio --all --full --no-pager                                                                                                                                                               │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo systemctl cat crio --no-pager                                                                                                                                                                               │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                     │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo crio config                                                                                                                                                                                                 │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ delete  │ -p enable-default-cni-448954                                                                                                                                                                                                                  │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ delete  │ -p disable-driver-mounts-634520                                                                                                                                                                                                               │ disable-driver-mounts-634520 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-031066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p no-preload-031066 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-880603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-880603 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ image   │ old-k8s-version-951975 image list --format=json                                                                                                                                                                                               │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ pause   │ -p old-k8s-version-951975 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-986220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-880603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-986220 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ image   │ no-preload-031066 image list --format=json                                                                                                                                                                                                    │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ pause   │ -p no-preload-031066 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-986220 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:17:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:17:29.203727  324191 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:17:29.204025  324191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:29.204037  324191 out.go:374] Setting ErrFile to fd 2...
	I1018 09:17:29.204044  324191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:29.204391  324191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:17:29.205672  324191 out.go:368] Setting JSON to false
	I1018 09:17:29.207338  324191 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3597,"bootTime":1760775452,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:17:29.208225  324191 start.go:141] virtualization: kvm guest
	I1018 09:17:29.210154  324191 out.go:179] * [default-k8s-diff-port-986220] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:17:29.212073  324191 notify.go:220] Checking for updates...
	I1018 09:17:29.212110  324191 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:17:29.213987  324191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:17:29.215657  324191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:29.220405  324191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:17:29.222335  324191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:17:29.223846  324191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:17:29.225965  324191 config.go:182] Loaded profile config "default-k8s-diff-port-986220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:29.226616  324191 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:17:29.263938  324191 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:17:29.264072  324191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:29.360716  324191 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 09:17:29.344884819 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:29.360863  324191 docker.go:318] overlay module found
	I1018 09:17:29.363402  324191 out.go:179] * Using the docker driver based on existing profile
	I1018 09:17:29.364231  324191 start.go:305] selected driver: docker
	I1018 09:17:29.364250  324191 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-986220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:29.364396  324191 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:17:29.365379  324191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:29.461136  324191 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 09:17:29.441495112 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:29.461592  324191 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:17:29.461647  324191 cni.go:84] Creating CNI manager for ""
	I1018 09:17:29.461742  324191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:17:29.461798  324191 start.go:349] cluster config:
	{Name:default-k8s-diff-port-986220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:29.464356  324191 out.go:179] * Starting "default-k8s-diff-port-986220" primary control-plane node in "default-k8s-diff-port-986220" cluster
	I1018 09:17:29.468800  324191 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:17:29.470317  324191 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:17:29.473065  324191 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:17:29.473150  324191 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:17:29.473181  324191 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:17:29.473186  324191 cache.go:58] Caching tarball of preloaded images
	I1018 09:17:29.473380  324191 preload.go:233] Found /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:17:29.473395  324191 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:17:29.473541  324191 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/config.json ...
	I1018 09:17:29.508320  324191 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:17:29.508369  324191 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:17:29.508390  324191 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:17:29.508421  324191 start.go:360] acquireMachinesLock for default-k8s-diff-port-986220: {Name:mkb47939fe80a3621f1854992111e30a9beab56d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:17:29.508515  324191 start.go:364] duration metric: took 74.575µs to acquireMachinesLock for "default-k8s-diff-port-986220"
	I1018 09:17:29.508535  324191 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:17:29.508542  324191 fix.go:54] fixHost starting: 
	I1018 09:17:29.508868  324191 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:17:29.538651  324191 fix.go:112] recreateIfNeeded on default-k8s-diff-port-986220: state=Stopped err=<nil>
	W1018 09:17:29.538691  324191 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Oct 18 09:16:41 no-preload-031066 crio[563]: time="2025-10-18T09:16:41.997891046Z" level=info msg="Started container" PID=1720 containerID=f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7/dashboard-metrics-scraper id=5a7c8308-d9a7-4faf-9084-e7103d4f7cde name=/runtime.v1.RuntimeService/StartContainer sandboxID=dac843f615cc4fb718665d00a9c20d6d3ec6271e0ca3b70890dab0552b61d73b
	Oct 18 09:16:42 no-preload-031066 crio[563]: time="2025-10-18T09:16:42.954985418Z" level=info msg="Removing container: b9c64c3a59bf38a4fa0fba6b4129a5dcfe61d4f9bf702b16c7b72d5cb86232f6" id=8578d585-d454-4e30-b8c4-825fb5331863 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:16:42 no-preload-031066 crio[563]: time="2025-10-18T09:16:42.964953967Z" level=info msg="Removed container b9c64c3a59bf38a4fa0fba6b4129a5dcfe61d4f9bf702b16c7b72d5cb86232f6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7/dashboard-metrics-scraper" id=8578d585-d454-4e30-b8c4-825fb5331863 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.001297131Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=938f76ca-dab3-4d89-810b-98e70186d7d7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.002390692Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4959cebe-ea53-4a28-b596-f105bf746c51 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.003538344Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ec44ed24-9d44-4197-a729-5c3ce35bd7b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.003842922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.008219156Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.00843719Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ca6907044292ecd6377c46aa1583f9bd92bd14fa993991d2bdea3a406f1aab80/merged/etc/passwd: no such file or directory"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.008466256Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ca6907044292ecd6377c46aa1583f9bd92bd14fa993991d2bdea3a406f1aab80/merged/etc/group: no such file or directory"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.008788671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.034781361Z" level=info msg="Created container 75ce77572e8bf3a989ac18d086a9f8cfbaae21b8f7296e1a9244cd17d037a2e0: kube-system/storage-provisioner/storage-provisioner" id=ec44ed24-9d44-4197-a729-5c3ce35bd7b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.035485639Z" level=info msg="Starting container: 75ce77572e8bf3a989ac18d086a9f8cfbaae21b8f7296e1a9244cd17d037a2e0" id=fd49361d-b9ac-4094-870c-a434ec299d34 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.037807584Z" level=info msg="Started container" PID=1736 containerID=75ce77572e8bf3a989ac18d086a9f8cfbaae21b8f7296e1a9244cd17d037a2e0 description=kube-system/storage-provisioner/storage-provisioner id=fd49361d-b9ac-4094-870c-a434ec299d34 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b04896f5442c20ef3db6cda9b6c96661b5a753bf6d618df05e6dfdffad43e4d2
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.847273088Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0dd2ef85-2575-4c16-b2ac-e870f5339871 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.848259707Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=012dd2ea-7ce8-44f5-aef1-0e2d27c2fc7a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.849439565Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7/dashboard-metrics-scraper" id=c80863aa-8008-49b1-b0a2-45b906bbd060 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.849757045Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.856162829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.856720707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.89058289Z" level=info msg="Created container b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7/dashboard-metrics-scraper" id=c80863aa-8008-49b1-b0a2-45b906bbd060 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.89125079Z" level=info msg="Starting container: b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0" id=10181bcf-4080-48aa-92b3-048153822875 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:17:02 no-preload-031066 crio[563]: time="2025-10-18T09:17:02.89344226Z" level=info msg="Started container" PID=1750 containerID=b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7/dashboard-metrics-scraper id=10181bcf-4080-48aa-92b3-048153822875 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dac843f615cc4fb718665d00a9c20d6d3ec6271e0ca3b70890dab0552b61d73b
	Oct 18 09:17:03 no-preload-031066 crio[563]: time="2025-10-18T09:17:03.00702446Z" level=info msg="Removing container: f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33" id=0bb066b4-d6d4-4d4a-beec-700116ba5df8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:17:03 no-preload-031066 crio[563]: time="2025-10-18T09:17:03.017306887Z" level=info msg="Removed container f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7/dashboard-metrics-scraper" id=0bb066b4-d6d4-4d4a-beec-700116ba5df8 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b1fe4f4a7ee10       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago       Exited              dashboard-metrics-scraper   2                   dac843f615cc4       dashboard-metrics-scraper-6ffb444bf9-fg4h7   kubernetes-dashboard
	75ce77572e8bf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           29 seconds ago       Running             storage-provisioner         1                   b04896f5442c2       storage-provisioner                          kube-system
	e718435c8ea3e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   52 seconds ago       Running             kubernetes-dashboard        0                   e7b0c437c2f60       kubernetes-dashboard-855c9754f9-z9ksf        kubernetes-dashboard
	52bff8d4511f9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           About a minute ago   Running             busybox                     1                   ade36f0d610aa       busybox                                      default
	2dc50ea4d70ff       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           About a minute ago   Running             coredns                     0                   29f560b5299bd       coredns-66bc5c9577-h44wj                     kube-system
	8935165d381c7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           About a minute ago   Running             kube-proxy                  0                   015dca79e7a4e       kube-proxy-jr5qn                             kube-system
	c605782e4c42e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           About a minute ago   Exited              storage-provisioner         0                   b04896f5442c2       storage-provisioner                          kube-system
	703ddbb4126b1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           About a minute ago   Running             kindnet-cni                 0                   88f61b2c80faf       kindnet-k7m9t                                kube-system
	153dd41ff60f4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   ffdaa1abc2b2f       kube-controller-manager-no-preload-031066    kube-system
	b51a1224ef6b8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   456d429fde4b1       etcd-no-preload-031066                       kube-system
	62682de07bbfe       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   98a015178bec6       kube-apiserver-no-preload-031066             kube-system
	db536597b2746       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   ca618beec387d       kube-scheduler-no-preload-031066             kube-system
	
	
	==> coredns [2dc50ea4d70ff173a36533c407b63be08c1ba2f027b2e06301f77dc0a6e2fb65] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34692 - 46868 "HINFO IN 7617922424595391023.677625672438370840. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.485934365s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-031066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-031066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=no-preload-031066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_15_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:15:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-031066
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:17:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:17:21 +0000   Sat, 18 Oct 2025 09:15:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:17:21 +0000   Sat, 18 Oct 2025 09:15:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:17:21 +0000   Sat, 18 Oct 2025 09:15:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:17:21 +0000   Sat, 18 Oct 2025 09:15:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-031066
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                01d62f53-a2fc-4f1d-88c2-abcb9799608b
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-h44wj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-no-preload-031066                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-k7m9t                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-031066              250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-no-preload-031066     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-jr5qn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-031066              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fg4h7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-z9ksf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 113s               kube-proxy       
	  Normal  Starting                 60s                kube-proxy       
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node no-preload-031066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node no-preload-031066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node no-preload-031066 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s               node-controller  Node no-preload-031066 event: Registered Node no-preload-031066 in Controller
	  Normal  NodeReady                101s               kubelet          Node no-preload-031066 status is now: NodeReady
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node no-preload-031066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node no-preload-031066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 64s)  kubelet          Node no-preload-031066 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                node-controller  Node no-preload-031066 event: Registered Node no-preload-031066 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[ +43.809014] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 6f 4b 2b 7f 46 08 06
	[  +0.000367] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	
	
	==> etcd [b51a1224ef6b876bc35ce20f2366f94525e300e4432dff8348abbde915ade5af] <==
	{"level":"warn","ts":"2025-10-18T09:16:29.791148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.799242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.807660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.817100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.831947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.842016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.850838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.860303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.869529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.879662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.888839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.899557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.908881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.916788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.924602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.933229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.941990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.950850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.959311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.967690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.983045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:29.992452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:30.001468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:16:30.075074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33554","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:17:10.721814Z","caller":"traceutil/trace.go:172","msg":"trace[1423453140] transaction","detail":"{read_only:false; response_revision:627; number_of_response:1; }","duration":"154.667761ms","start":"2025-10-18T09:17:10.567124Z","end":"2025-10-18T09:17:10.721791Z","steps":["trace[1423453140] 'process raft request'  (duration: 153.789233ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:17:31 up 59 min,  0 user,  load average: 5.27, 3.82, 2.52
	Linux no-preload-031066 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [703ddbb4126b1f1be32f6c0f727cee37f39cb83a922d2a063922f5b314414d37] <==
	I1018 09:16:31.539068       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:16:31.539356       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 09:16:31.539578       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:16:31.539600       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:16:31.539627       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:16:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:16:31.837720       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:16:31.839640       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:16:31.839673       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:16:31.839839       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:16:32.251296       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:16:32.251331       1 metrics.go:72] Registering metrics
	I1018 09:16:32.251447       1 controller.go:711] "Syncing nftables rules"
	I1018 09:16:41.754506       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:16:41.754598       1 main.go:301] handling current node
	I1018 09:16:51.757528       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:16:51.757589       1 main.go:301] handling current node
	I1018 09:17:01.754603       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:17:01.754642       1 main.go:301] handling current node
	I1018 09:17:11.754136       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:17:11.754177       1 main.go:301] handling current node
	I1018 09:17:21.754089       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:17:21.754127       1 main.go:301] handling current node
	
	
	==> kube-apiserver [62682de07bbfeb0d0f0c6405121566236410d571314651f369c15f65938b548a] <==
	I1018 09:16:30.746915       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:16:30.749291       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:16:30.749371       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:16:30.749446       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:16:30.752332       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:16:30.752367       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:16:30.753140       1 policy_source.go:240] refreshing policies
	I1018 09:16:30.753489       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 09:16:30.753550       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:16:30.753730       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 09:16:30.761751       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 09:16:30.777627       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:16:30.788208       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:16:30.810626       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:16:30.997582       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:16:31.248216       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:16:31.314520       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:16:31.348229       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:16:31.361875       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:16:31.436550       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.56.127"}
	I1018 09:16:31.454383       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.60.206"}
	I1018 09:16:31.643763       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:16:34.661050       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:16:34.710751       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:16:34.810319       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [153dd41ff60f495d247d4bd42054dd9255c2fe5ccbc173f31021152a50b30308] <==
	I1018 09:16:34.207203       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:16:34.207211       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:16:34.207219       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:16:34.207445       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:16:34.207703       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:16:34.208643       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:16:34.208656       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:16:34.208675       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:16:34.208836       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:16:34.209789       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:16:34.209814       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:16:34.209839       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:16:34.213425       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:16:34.215735       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:16:34.215750       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 09:16:34.216933       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:16:34.219125       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:16:34.222404       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:16:34.222526       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:16:34.222673       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-031066"
	I1018 09:16:34.222734       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 09:16:34.225782       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:16:34.228724       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:16:34.232083       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:16:34.236475       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8935165d381c76e0adbf1b4796ec6dacb8a681c43afb77e2bac74597041759ac] <==
	I1018 09:16:31.336014       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:16:31.407028       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:16:31.507489       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:16:31.508172       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 09:16:31.508451       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:16:31.536271       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:16:31.536370       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:16:31.543686       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:16:31.544303       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:16:31.544440       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:16:31.548173       1 config.go:200] "Starting service config controller"
	I1018 09:16:31.548206       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:16:31.548239       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:16:31.548244       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:16:31.548289       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:16:31.548299       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:16:31.548448       1 config.go:309] "Starting node config controller"
	I1018 09:16:31.548537       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:16:31.648833       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:16:31.648938       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:16:31.648957       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:16:31.648969       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [db536597b2746191742cfa1b8df28f2fe3935b9a553d5543f993db2773c9f6a1] <==
	I1018 09:16:28.900193       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:16:30.687495       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:16:30.687530       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:16:30.687544       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:16:30.687554       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:16:30.726825       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:16:30.726877       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:16:30.735804       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:16:30.735849       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:16:30.738454       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:16:30.738548       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:16:30.836301       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:16:34 no-preload-031066 kubelet[708]: I1018 09:16:34.917187     708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7bd3024a-a71b-4103-8169-ebb260c80af3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fg4h7\" (UID: \"7bd3024a-a71b-4103-8169-ebb260c80af3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7"
	Oct 18 09:16:34 no-preload-031066 kubelet[708]: I1018 09:16:34.917271     708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjxd5\" (UniqueName: \"kubernetes.io/projected/7bd3024a-a71b-4103-8169-ebb260c80af3-kube-api-access-tjxd5\") pod \"dashboard-metrics-scraper-6ffb444bf9-fg4h7\" (UID: \"7bd3024a-a71b-4103-8169-ebb260c80af3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7"
	Oct 18 09:16:41 no-preload-031066 kubelet[708]: I1018 09:16:41.099118     708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:16:41 no-preload-031066 kubelet[708]: I1018 09:16:41.949048     708 scope.go:117] "RemoveContainer" containerID="b9c64c3a59bf38a4fa0fba6b4129a5dcfe61d4f9bf702b16c7b72d5cb86232f6"
	Oct 18 09:16:41 no-preload-031066 kubelet[708]: I1018 09:16:41.965225     708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z9ksf" podStartSLOduration=3.780999331 podStartE2EDuration="7.965201277s" podCreationTimestamp="2025-10-18 09:16:34 +0000 UTC" firstStartedPulling="2025-10-18 09:16:35.103089003 +0000 UTC m=+7.358958911" lastFinishedPulling="2025-10-18 09:16:39.28729095 +0000 UTC m=+11.543160857" observedRunningTime="2025-10-18 09:16:39.95502353 +0000 UTC m=+12.210893458" watchObservedRunningTime="2025-10-18 09:16:41.965201277 +0000 UTC m=+14.221071290"
	Oct 18 09:16:42 no-preload-031066 kubelet[708]: I1018 09:16:42.953497     708 scope.go:117] "RemoveContainer" containerID="b9c64c3a59bf38a4fa0fba6b4129a5dcfe61d4f9bf702b16c7b72d5cb86232f6"
	Oct 18 09:16:42 no-preload-031066 kubelet[708]: I1018 09:16:42.953625     708 scope.go:117] "RemoveContainer" containerID="f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33"
	Oct 18 09:16:42 no-preload-031066 kubelet[708]: E1018 09:16:42.953842     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fg4h7_kubernetes-dashboard(7bd3024a-a71b-4103-8169-ebb260c80af3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7" podUID="7bd3024a-a71b-4103-8169-ebb260c80af3"
	Oct 18 09:16:43 no-preload-031066 kubelet[708]: I1018 09:16:43.957284     708 scope.go:117] "RemoveContainer" containerID="f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33"
	Oct 18 09:16:43 no-preload-031066 kubelet[708]: E1018 09:16:43.957512     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fg4h7_kubernetes-dashboard(7bd3024a-a71b-4103-8169-ebb260c80af3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7" podUID="7bd3024a-a71b-4103-8169-ebb260c80af3"
	Oct 18 09:16:50 no-preload-031066 kubelet[708]: I1018 09:16:50.559559     708 scope.go:117] "RemoveContainer" containerID="f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33"
	Oct 18 09:16:50 no-preload-031066 kubelet[708]: E1018 09:16:50.559818     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fg4h7_kubernetes-dashboard(7bd3024a-a71b-4103-8169-ebb260c80af3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7" podUID="7bd3024a-a71b-4103-8169-ebb260c80af3"
	Oct 18 09:17:02 no-preload-031066 kubelet[708]: I1018 09:17:02.000912     708 scope.go:117] "RemoveContainer" containerID="c605782e4c42eb91b018538170fe921def3b1396402bc03eee1dbce4f4af6a69"
	Oct 18 09:17:02 no-preload-031066 kubelet[708]: I1018 09:17:02.846721     708 scope.go:117] "RemoveContainer" containerID="f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33"
	Oct 18 09:17:03 no-preload-031066 kubelet[708]: I1018 09:17:03.005593     708 scope.go:117] "RemoveContainer" containerID="f7ce25988f3e487c4e57709df2b469fcf69805297e28027fb1cd63e4f1ce5b33"
	Oct 18 09:17:03 no-preload-031066 kubelet[708]: I1018 09:17:03.005831     708 scope.go:117] "RemoveContainer" containerID="b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0"
	Oct 18 09:17:03 no-preload-031066 kubelet[708]: E1018 09:17:03.006034     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fg4h7_kubernetes-dashboard(7bd3024a-a71b-4103-8169-ebb260c80af3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7" podUID="7bd3024a-a71b-4103-8169-ebb260c80af3"
	Oct 18 09:17:10 no-preload-031066 kubelet[708]: I1018 09:17:10.559792     708 scope.go:117] "RemoveContainer" containerID="b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0"
	Oct 18 09:17:10 no-preload-031066 kubelet[708]: E1018 09:17:10.560069     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fg4h7_kubernetes-dashboard(7bd3024a-a71b-4103-8169-ebb260c80af3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7" podUID="7bd3024a-a71b-4103-8169-ebb260c80af3"
	Oct 18 09:17:22 no-preload-031066 kubelet[708]: I1018 09:17:22.845893     708 scope.go:117] "RemoveContainer" containerID="b1fe4f4a7ee10d934d38d0876966de29079cb3ec3001c753475493045aa346b0"
	Oct 18 09:17:22 no-preload-031066 kubelet[708]: E1018 09:17:22.846137     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fg4h7_kubernetes-dashboard(7bd3024a-a71b-4103-8169-ebb260c80af3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fg4h7" podUID="7bd3024a-a71b-4103-8169-ebb260c80af3"
	Oct 18 09:17:25 no-preload-031066 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:17:25 no-preload-031066 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:17:25 no-preload-031066 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:17:25 no-preload-031066 systemd[1]: kubelet.service: Consumed 1.866s CPU time.
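	The back-off progression above (10s twice, then 20s, for the same dashboard-metrics-scraper container) is kubelet's standard CrashLoopBackOff behaviour: the restart delay doubles on each crash up to a 5m cap and resets only after the container runs cleanly for roughly ten minutes. One way to watch the loop directly, using the pod and context names from this run:
	
	  kubectl --context no-preload-031066 -n kubernetes-dashboard \
	    get pod dashboard-metrics-scraper-6ffb444bf9-fg4h7 -w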
	
	
	==> kubernetes-dashboard [e718435c8ea3ef9ed304f9cc405a3feced7a46aa8145c5c913dda9eee2bbfb61] <==
	2025/10/18 09:16:39 Starting overwatch
	2025/10/18 09:16:39 Using namespace: kubernetes-dashboard
	2025/10/18 09:16:39 Using in-cluster config to connect to apiserver
	2025/10/18 09:16:39 Using secret token for csrf signing
	2025/10/18 09:16:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:16:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:16:39 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:16:39 Generating JWE encryption key
	2025/10/18 09:16:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:16:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:16:39 Initializing JWE encryption key from synchronized object
	2025/10/18 09:16:39 Creating in-cluster Sidecar client
	2025/10/18 09:16:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:16:39 Serving insecurely on HTTP port: 9090
	2025/10/18 09:17:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [75ce77572e8bf3a989ac18d086a9f8cfbaae21b8f7296e1a9244cd17d037a2e0] <==
	W1018 09:17:02.065498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:05.521707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:09.782580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:13.381866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:16.436646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:19.459680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:19.470851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:17:19.471035       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:17:19.471180       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d61b21d-ca88-4508-8d32-276d0fdbca79", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-031066_d0cdd423-e8c4-49e0-9a4c-c57eb38b4aab became leader
	I1018 09:17:19.471261       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-031066_d0cdd423-e8c4-49e0-9a4c-c57eb38b4aab!
	W1018 09:17:19.473864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:19.480452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:17:19.571907       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-031066_d0cdd423-e8c4-49e0-9a4c-c57eb38b4aab!
	W1018 09:17:21.484055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:21.489152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:23.492232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:23.496913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:25.501775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:25.517015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:27.531494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:27.542853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:29.550634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:29.558277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:31.560714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:31.564560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c605782e4c42eb91b018538170fe921def3b1396402bc03eee1dbce4f4af6a69] <==
	I1018 09:16:31.284759       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:17:01.289100       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
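The dump above also captures the storage-provisioner handover: the first instance (c605782e...) exited fatally when its apiserver /version request to 10.96.0.1:443 timed out during the restart, and the replacement (75ce7757...) then acquired the kube-system/k8s.io-minikube-hostpath leader-election lease. That lease lives on the (deprecated) v1 Endpoints object named in the event, hence the steady stream of deprecation warnings; it can be inspected with the context name from this run:

  kubectl --context no-preload-031066 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml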
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-031066 -n no-preload-031066
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-031066 -n no-preload-031066: exit status 2 (368.840124ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-031066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-444637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-444637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (299.298084ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:17:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-444637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
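The exit status 11 above comes from minikube's MK_ADDON_ENABLE_PAUSED path: before enabling an addon it checks whether any containers are paused by running runc's list command inside the node (the exact invocation is quoted in the stderr), and the check fails here because runc's default state root, /run/runc, does not exist yet immediately after the node restart. A minimal sketch for inspecting the same state by hand, with the profile name taken from this run:

  minikube ssh -p newest-cni-444637 -- sudo runc list -f json   # the command the paused check shells out to
  minikube ssh -p newest-cni-444637 -- ls -ld /run/runc         # confirms whether runc's state root exists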
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-444637
helpers_test.go:243: (dbg) docker inspect newest-cni-444637:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941",
	        "Created": "2025-10-18T09:17:13.777714578Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319878,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:17:13.821646199Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941/hostname",
	        "HostsPath": "/var/lib/docker/containers/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941/hosts",
	        "LogPath": "/var/lib/docker/containers/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941-json.log",
	        "Name": "/newest-cni-444637",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-444637:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-444637",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941",
	                "LowerDir": "/var/lib/docker/overlay2/299ed2d3b858283b2c8206fda315e99ed5d127ab10dbdaecabdcb4955ace8dbc-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/299ed2d3b858283b2c8206fda315e99ed5d127ab10dbdaecabdcb4955ace8dbc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/299ed2d3b858283b2c8206fda315e99ed5d127ab10dbdaecabdcb4955ace8dbc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/299ed2d3b858283b2c8206fda315e99ed5d127ab10dbdaecabdcb4955ace8dbc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-444637",
	                "Source": "/var/lib/docker/volumes/newest-cni-444637/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-444637",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-444637",
	                "name.minikube.sigs.k8s.io": "newest-cni-444637",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c837e697f103ed7f8b31632dc20661f2cdb41adf4e07575bece0d736da8fba00",
	            "SandboxKey": "/var/run/docker/netns/c837e697f103",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-444637": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:f5:a4:cb:bb:1f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dd9c4a8b133b5630e17e447400d74046fdd59b021f81b3128919b2fa8ae8dbbe",
	                    "EndpointID": "313a2a688f4a04629a84fb7d01813ea4c4bd7869c38b1f465a99f6b67b24adb7",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-444637",
	                        "891566b377ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-444637 -n newest-cni-444637
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-444637 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-444637 logs -n 25: (1.212281189s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-448954 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                     │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ ssh     │ -p enable-default-cni-448954 sudo crio config                                                                                                                                                                                                 │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ delete  │ -p enable-default-cni-448954                                                                                                                                                                                                                  │ enable-default-cni-448954    │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ delete  │ -p disable-driver-mounts-634520                                                                                                                                                                                                               │ disable-driver-mounts-634520 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-031066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p no-preload-031066 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-880603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-880603 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ image   │ old-k8s-version-951975 image list --format=json                                                                                                                                                                                               │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ pause   │ -p old-k8s-version-951975 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-986220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-880603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-986220 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ image   │ no-preload-031066 image list --format=json                                                                                                                                                                                                    │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ pause   │ -p no-preload-031066 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-986220 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ delete  │ -p no-preload-031066                                                                                                                                                                                                                          │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p no-preload-031066                                                                                                                                                                                                                          │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-444637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:17:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:17:29.203727  324191 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:17:29.204025  324191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:29.204037  324191 out.go:374] Setting ErrFile to fd 2...
	I1018 09:17:29.204044  324191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:29.204391  324191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:17:29.205672  324191 out.go:368] Setting JSON to false
	I1018 09:17:29.207338  324191 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3597,"bootTime":1760775452,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:17:29.208225  324191 start.go:141] virtualization: kvm guest
	I1018 09:17:29.210154  324191 out.go:179] * [default-k8s-diff-port-986220] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:17:29.212073  324191 notify.go:220] Checking for updates...
	I1018 09:17:29.212110  324191 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:17:29.213987  324191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:17:29.215657  324191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:29.220405  324191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:17:29.222335  324191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:17:29.223846  324191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:17:29.225965  324191 config.go:182] Loaded profile config "default-k8s-diff-port-986220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:29.226616  324191 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:17:29.263938  324191 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:17:29.264072  324191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:29.360716  324191 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 09:17:29.344884819 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:29.360863  324191 docker.go:318] overlay module found
	I1018 09:17:29.363402  324191 out.go:179] * Using the docker driver based on existing profile
	I1018 09:17:29.364231  324191 start.go:305] selected driver: docker
	I1018 09:17:29.364250  324191 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-986220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:29.364396  324191 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:17:29.365379  324191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:29.461136  324191 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 09:17:29.441495112 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:29.461592  324191 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:17:29.461647  324191 cni.go:84] Creating CNI manager for ""
	I1018 09:17:29.461742  324191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:17:29.461798  324191 start.go:349] cluster config:
	{Name:default-k8s-diff-port-986220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:29.464356  324191 out.go:179] * Starting "default-k8s-diff-port-986220" primary control-plane node in "default-k8s-diff-port-986220" cluster
	I1018 09:17:29.468800  324191 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:17:29.470317  324191 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:17:29.473065  324191 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:17:29.473150  324191 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:17:29.473181  324191 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:17:29.473186  324191 cache.go:58] Caching tarball of preloaded images
	I1018 09:17:29.473380  324191 preload.go:233] Found /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:17:29.473395  324191 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:17:29.473541  324191 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/config.json ...
	I1018 09:17:29.508320  324191 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:17:29.508369  324191 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:17:29.508390  324191 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:17:29.508421  324191 start.go:360] acquireMachinesLock for default-k8s-diff-port-986220: {Name:mkb47939fe80a3621f1854992111e30a9beab56d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:17:29.508515  324191 start.go:364] duration metric: took 74.575µs to acquireMachinesLock for "default-k8s-diff-port-986220"
	I1018 09:17:29.508535  324191 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:17:29.508542  324191 fix.go:54] fixHost starting: 
	I1018 09:17:29.508868  324191 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:17:29.538651  324191 fix.go:112] recreateIfNeeded on default-k8s-diff-port-986220: state=Stopped err=<nil>
	W1018 09:17:29.538691  324191 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:17:30.515268  317552 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.50251371s
	I1018 09:17:30.564099  317552 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:17:30.617250  317552 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:17:30.642754  317552 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:17:30.643026  317552 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-444637 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:17:30.665495  317552 kubeadm.go:318] [bootstrap-token] Using token: 3sjplx.xtedjpt826zeoemw
	W1018 09:17:28.334569  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:30.340585  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	I1018 09:17:30.672051  317552 out.go:252]   - Configuring RBAC rules ...
	I1018 09:17:30.672201  317552 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:17:30.672917  317552 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:17:30.680866  317552 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:17:30.687247  317552 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:17:30.693234  317552 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:17:30.696767  317552 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:17:30.924864  317552 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:17:31.356054  317552 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:17:31.922672  317552 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:17:31.923800  317552 kubeadm.go:318] 
	I1018 09:17:31.923899  317552 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:17:31.923907  317552 kubeadm.go:318] 
	I1018 09:17:31.924014  317552 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:17:31.924024  317552 kubeadm.go:318] 
	I1018 09:17:31.924055  317552 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:17:31.924143  317552 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:17:31.924206  317552 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:17:31.924216  317552 kubeadm.go:318] 
	I1018 09:17:31.924316  317552 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:17:31.924367  317552 kubeadm.go:318] 
	I1018 09:17:31.924446  317552 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:17:31.924457  317552 kubeadm.go:318] 
	I1018 09:17:31.924543  317552 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:17:31.924653  317552 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:17:31.924754  317552 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:17:31.924765  317552 kubeadm.go:318] 
	I1018 09:17:31.924845  317552 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:17:31.924962  317552 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:17:31.924978  317552 kubeadm.go:318] 
	I1018 09:17:31.925102  317552 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 3sjplx.xtedjpt826zeoemw \
	I1018 09:17:31.925239  317552 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f \
	I1018 09:17:31.925268  317552 kubeadm.go:318] 	--control-plane 
	I1018 09:17:31.925276  317552 kubeadm.go:318] 
	I1018 09:17:31.925406  317552 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:17:31.925417  317552 kubeadm.go:318] 
	I1018 09:17:31.925525  317552 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 3sjplx.xtedjpt826zeoemw \
	I1018 09:17:31.925650  317552 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:03f732b5d900f8eb7de41cf71a6356f3c4edf03d7a3795a959179e2391e7734f 
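	
	The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA at any time, which is useful once this output has scrolled away. A minimal sketch, assuming minikube's CA location at /var/lib/minikube/certs/ca.crt (the certs steps later in this log copy it there; stock kubeadm uses /etc/kubernetes/pki/ca.crt):
	
		# Recompute the sha256 hash of the CA public key used by --discovery-token-ca-cert-hash
		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'
	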
	I1018 09:17:31.929089  317552 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:17:31.929219  317552 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:17:31.929257  317552 cni.go:84] Creating CNI manager for ""
	I1018 09:17:31.929272  317552 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:17:31.930770  317552 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 09:17:31.932182  317552 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:17:31.936942  317552 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:17:31.936964  317552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:17:31.953998  317552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:17:32.220179  317552 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:17:32.220261  317552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:17:32.220279  317552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-444637 minikube.k8s.io/updated_at=2025_10_18T09_17_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=newest-cni-444637 minikube.k8s.io/primary=true
	I1018 09:17:32.231714  317552 ops.go:34] apiserver oom_adj: -16
	I1018 09:17:32.318688  317552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:17:32.819600  317552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:17:33.318905  317552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:17:33.819532  317552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:17:29.540172  324191 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-986220" ...
	I1018 09:17:29.540272  324191 cli_runner.go:164] Run: docker start default-k8s-diff-port-986220
	I1018 09:17:29.952100  324191 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:17:29.980674  324191 kic.go:430] container "default-k8s-diff-port-986220" state is running.
	I1018 09:17:29.981526  324191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-986220
	I1018 09:17:30.013233  324191 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/config.json ...
	I1018 09:17:30.013923  324191 machine.go:93] provisionDockerMachine start ...
	I1018 09:17:30.013994  324191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:17:30.043258  324191 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:30.043770  324191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 09:17:30.043822  324191 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:17:30.044847  324191 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57550->127.0.0.1:33128: read: connection reset by peer
	I1018 09:17:33.185985  324191 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-986220
	
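	The dial error above is transient: the container has just been restarted and sshd is not yet accepting connections on the host port Docker mapped to the container's 22/tcp, so libmachine retries until the hostname command succeeds (as it does three seconds later). The same session can be opened by hand; a sketch, assuming the identity file path shown in the sshutil lines below:
	
		# Resolve the host port mapped to the container's 22/tcp, then SSH in as 'docker'
		PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-986220)
		ssh -p "$PORT" -i /home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa docker@127.0.0.1
	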
	I1018 09:17:33.186016  324191 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-986220"
	I1018 09:17:33.186083  324191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:17:33.206114  324191 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:33.206329  324191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 09:17:33.206359  324191 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-986220 && echo "default-k8s-diff-port-986220" | sudo tee /etc/hostname
	I1018 09:17:33.353961  324191 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-986220
	
	I1018 09:17:33.354031  324191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:17:33.376051  324191 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:33.376391  324191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 09:17:33.376439  324191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-986220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-986220/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-986220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:17:33.514289  324191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
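	
	The script above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1, so the name resolves even with no DNS configured. A quick check, as a sketch:
	
		# Confirm the hostname resolves locally after the /etc/hosts edit
		getent hosts default-k8s-diff-port-986220    # expected: 127.0.1.1  default-k8s-diff-port-986220
	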
	I1018 09:17:33.514323  324191 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:17:33.514379  324191 ubuntu.go:190] setting up certificates
	I1018 09:17:33.514397  324191 provision.go:84] configureAuth start
	I1018 09:17:33.514465  324191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-986220
	I1018 09:17:33.533928  324191 provision.go:143] copyHostCerts
	I1018 09:17:33.533989  324191 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:17:33.534006  324191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:17:33.534086  324191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:17:33.534186  324191 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:17:33.534196  324191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:17:33.534224  324191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:17:33.534278  324191 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:17:33.534286  324191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:17:33.534308  324191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:17:33.534407  324191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-986220 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-986220 localhost minikube]
	I1018 09:17:33.871191  324191 provision.go:177] copyRemoteCerts
	I1018 09:17:33.871268  324191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:17:33.871315  324191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:17:33.891742  324191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:17:33.991443  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:17:34.014654  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:17:34.036507  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 09:17:34.058670  324191 provision.go:87] duration metric: took 544.256394ms to configureAuth
	I1018 09:17:34.058704  324191 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:17:34.058880  324191 config.go:182] Loaded profile config "default-k8s-diff-port-986220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:34.058995  324191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:17:34.079033  324191 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:34.079274  324191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 09:17:34.079303  324191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:17:34.586560  324191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:17:34.586592  324191 machine.go:96] duration metric: took 4.572650458s to provisionDockerMachine
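	
	The provisioning step just completed wrote a one-line environment file for CRI-O and restarted the service; as echoed back in the SSH output above, the file should read:
	
		# /etc/sysconfig/crio.minikube (sketch of the file written above)
		CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	Marking the whole service CIDR as an insecure registry lets CRI-O pull from in-cluster registries over plain HTTP.
	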
	I1018 09:17:34.586604  324191 start.go:293] postStartSetup for "default-k8s-diff-port-986220" (driver="docker")
	I1018 09:17:34.586617  324191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:17:34.586689  324191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:17:34.586731  324191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:17:34.608860  324191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:17:34.709195  324191 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:17:34.713121  324191 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:17:34.713147  324191 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:17:34.713159  324191 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:17:34.713215  324191 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:17:34.713298  324191 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:17:34.713414  324191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:17:34.721795  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:17:34.741240  324191 start.go:296] duration metric: took 154.619187ms for postStartSetup
	I1018 09:17:34.741334  324191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:17:34.741474  324191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:17:34.762706  324191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:17:34.857991  324191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:17:34.863142  324191 fix.go:56] duration metric: took 5.354595141s for fixHost
	I1018 09:17:34.863170  324191 start.go:83] releasing machines lock for "default-k8s-diff-port-986220", held for 5.354645555s
	I1018 09:17:34.863243  324191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-986220
	I1018 09:17:34.884822  324191 ssh_runner.go:195] Run: cat /version.json
	I1018 09:17:34.884880  324191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:17:34.884883  324191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:17:34.884949  324191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:17:34.907432  324191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:17:34.907617  324191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:17:35.003979  324191 ssh_runner.go:195] Run: systemctl --version
	I1018 09:17:35.063788  324191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:17:35.106120  324191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:17:35.111828  324191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:17:35.111897  324191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:17:35.120835  324191 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:17:35.120862  324191 start.go:495] detecting cgroup driver to use...
	I1018 09:17:35.120899  324191 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:17:35.120945  324191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:17:35.137721  324191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:17:35.151948  324191 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:17:35.152015  324191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:17:35.169686  324191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:17:35.184040  324191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:17:35.277041  324191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:17:35.371876  324191 docker.go:234] disabling docker service ...
	I1018 09:17:35.371940  324191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:17:35.388441  324191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:17:35.404326  324191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:17:35.495631  324191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:17:35.593279  324191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:17:35.608246  324191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:17:35.625217  324191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:17:35.625267  324191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:35.635864  324191 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:17:35.635938  324191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:35.646219  324191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:35.657010  324191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:35.667870  324191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:17:35.677083  324191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:35.687374  324191 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:35.696775  324191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:17:35.707239  324191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:17:35.717080  324191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:17:35.727070  324191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:17:35.814761  324191 ssh_runner.go:195] Run: sudo systemctl restart crio
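	
	Taken together, the sed edits above leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly these keys (a reconstruction from the commands, not a capture of the file; the section headers are the standard CRI-O ones and are assumed here):
	
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10.1"
	
		[crio.runtime]
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
	
	The daemon-reload and crio restart above make the new pause image, the systemd cgroup driver, and the unprivileged-port sysctl take effect.
	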
	I1018 09:17:35.939540  324191 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:17:35.939604  324191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:17:35.943922  324191 start.go:563] Will wait 60s for crictl version
	I1018 09:17:35.943982  324191 ssh_runner.go:195] Run: which crictl
	I1018 09:17:35.947920  324191 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:17:35.973012  324191 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:17:35.973103  324191 ssh_runner.go:195] Run: crio --version
	I1018 09:17:36.004163  324191 ssh_runner.go:195] Run: crio --version
	I1018 09:17:36.044565  324191 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1018 09:17:32.833736  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:34.833970  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	I1018 09:17:36.045927  324191 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-986220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:17:36.065939  324191 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 09:17:36.070545  324191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:17:36.081783  324191 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-986220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:17:36.081923  324191 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:17:36.081984  324191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:17:36.122196  324191 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:17:36.122226  324191 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:17:36.122303  324191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:17:36.157683  324191 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:17:36.157712  324191 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:17:36.157722  324191 kubeadm.go:934] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1018 09:17:36.157841  324191 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-986220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
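	
	The empty ExecStart= in the unit text above is the standard systemd override idiom: in a drop-in it first clears the ExecStart inherited from the base kubelet.service, and the next line installs minikube's own kubelet command line. The unit is written as a drop-in a few lines below (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf), and the merged result can be inspected on the node with:
	
		# Show the effective kubelet unit, including drop-in overrides
		systemctl cat kubelet
	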
	I1018 09:17:36.157925  324191 ssh_runner.go:195] Run: crio config
	I1018 09:17:36.207679  324191 cni.go:84] Creating CNI manager for ""
	I1018 09:17:36.207710  324191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:17:36.207722  324191 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:17:36.207744  324191 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-986220 NodeName:default-k8s-diff-port-986220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:17:36.207876  324191 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-986220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
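	The generated configuration above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. It can be sanity-checked offline before kubeadm consumes it; a sketch, assuming a kubeadm recent enough to ship the "config validate" subcommand and the usual minikube binaries layout under /var/lib/minikube/binaries/<version>/:
	
		# Validate the generated kubeadm config without touching the cluster
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	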
	I1018 09:17:36.207948  324191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:17:36.216761  324191 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:17:36.216842  324191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:17:36.225770  324191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 09:17:36.240184  324191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:17:36.254503  324191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1018 09:17:36.268624  324191 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:17:36.272735  324191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:17:36.285816  324191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:17:36.377105  324191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:17:36.401412  324191 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220 for IP: 192.168.94.2
	I1018 09:17:36.401439  324191 certs.go:195] generating shared ca certs ...
	I1018 09:17:36.401460  324191 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:36.401636  324191 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:17:36.401706  324191 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:17:36.401731  324191 certs.go:257] generating profile certs ...
	I1018 09:17:36.401845  324191 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/client.key
	I1018 09:17:36.401939  324191 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key.6dd2aec8
	I1018 09:17:36.401993  324191 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.key
	I1018 09:17:36.402152  324191 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:17:36.402192  324191 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:17:36.402204  324191 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:17:36.402248  324191 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:17:36.402283  324191 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:17:36.402310  324191 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:17:36.402384  324191 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:17:36.403247  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:17:36.423386  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:17:36.444950  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:17:36.467776  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:17:36.493994  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 09:17:36.514999  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:17:36.534054  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:17:36.554212  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/default-k8s-diff-port-986220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:17:36.573421  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:17:36.592740  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:17:36.614499  324191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:17:36.633615  324191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:17:36.647706  324191 ssh_runner.go:195] Run: openssl version
	I1018 09:17:36.654663  324191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:17:36.664416  324191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:17:36.668698  324191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:17:36.668762  324191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:17:36.706674  324191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:17:36.717581  324191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:17:36.728053  324191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:17:36.732957  324191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:17:36.733025  324191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:17:36.772928  324191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:17:36.782128  324191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:17:36.791835  324191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:17:36.796771  324191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:17:36.796831  324191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:17:36.836556  324191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
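	
	The three commands above are minikube's standard CA-install pattern: copy the PEM into /usr/share/ca-certificates, ask openssl for its subject hash, then symlink /etc/ssl/certs/<hash>.0 at it so OpenSSL-based clients can find the certificate by hash lookup. A minimal Go sketch of the hash-and-link step, run locally rather than through ssh_runner (the paths are illustrative):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkCertByHash mirrors the log's openssl + ln -fs sequence: compute the
	// OpenSSL subject hash of certPath and point <certsDir>/<hash>.0 at it.
	func linkCertByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any stale link first
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	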
	I1018 09:17:36.846478  324191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:17:36.851578  324191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:17:36.893673  324191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:17:36.939688  324191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:17:36.993508  324191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:17:37.048902  324191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:17:37.105131  324191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
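	
	Each `-checkend 86400` call above asks one question: is the certificate still valid 24 hours from now? The same check can be done in pure Go without shelling out; a small sketch (the path is an assumption taken from the log):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM certificate at path expires before
	// now+window — the same question `openssl x509 -checkend` answers.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
	
	func main() {
		// 86400 seconds = 24h, matching the log's -checkend argument.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
	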
	I1018 09:17:37.144665  324191 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-986220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-986220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:37.144768  324191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:17:37.144829  324191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:17:37.198688  324191 cri.go:89] found id: "8956123c1313708cc585f6ee981938531d1fde0ef837a5cdbf5b02ab1fb0c549"
	I1018 09:17:37.198712  324191 cri.go:89] found id: "8d1ab9fe3eb84ef483a99bbfe79d01dfa34dfdff518ca313e3c2299c6723b35e"
	I1018 09:17:37.198718  324191 cri.go:89] found id: "1dc67601595acad3b95b404bf690768d89426dc4a4256db06ee931235af514af"
	I1018 09:17:37.198723  324191 cri.go:89] found id: "bad27ff83c63687be534ccd3f079002f13a4d8cf081095fd1e212a53f3010fbf"
	I1018 09:17:37.198729  324191 cri.go:89] found id: ""
	I1018 09:17:37.198783  324191 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:17:37.224482  324191 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:17:37Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:17:37.224563  324191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:17:37.233826  324191 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:17:37.233848  324191 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:17:37.233903  324191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:17:37.242452  324191 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:17:37.243177  324191 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-986220" does not appear in /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:37.243641  324191 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-5897/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-986220" cluster setting kubeconfig missing "default-k8s-diff-port-986220" context setting]
	I1018 09:17:37.244324  324191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
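	
	The repair step above adds the missing cluster and context entries to the shared kubeconfig under a file lock. A minimal client-go sketch of the same add-and-write operation, assuming the k8s.io/client-go dependency; the server URL and names are taken from the log, and the file locking, nil-map checks, and credential wiring are trimmed:
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)
	
	// repairKubeconfig inserts cluster and context entries for the profile,
	// then writes the config back — roughly what kubeconfig.go does here.
	func repairKubeconfig(path, name, server string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		cluster := clientcmdapi.NewCluster()
		cluster.Server = server
		cfg.Clusters[name] = cluster
	
		ctx := clientcmdapi.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name // assumes a matching user entry exists
		cfg.Contexts[name] = ctx
	
		return clientcmd.WriteToFile(*cfg, path)
	}
	
	func main() {
		// Values from the log; this profile's API server listens on 8444.
		err := repairKubeconfig(
			"/home/jenkins/minikube-integration/21767-5897/kubeconfig",
			"default-k8s-diff-port-986220",
			"https://192.168.94.2:8444",
		)
		if err != nil {
			fmt.Println(err)
		}
	}
	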
	I1018 09:17:37.245810  324191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:17:37.256013  324191 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1018 09:17:37.256053  324191 kubeadm.go:601] duration metric: took 22.198789ms to restartPrimaryControlPlane
	I1018 09:17:37.256065  324191 kubeadm.go:402] duration metric: took 111.416461ms to StartCluster
	I1018 09:17:37.256085  324191 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:37.256159  324191 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:37.257926  324191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:37.258212  324191 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:17:37.258317  324191 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:17:37.258438  324191 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-986220"
	I1018 09:17:37.258431  324191 config.go:182] Loaded profile config "default-k8s-diff-port-986220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:37.258466  324191 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-986220"
	W1018 09:17:37.258476  324191 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:17:37.258482  324191 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-986220"
	I1018 09:17:37.258506  324191 host.go:66] Checking if "default-k8s-diff-port-986220" exists ...
	I1018 09:17:37.258512  324191 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-986220"
	I1018 09:17:37.258507  324191 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-986220"
	W1018 09:17:37.258522  324191 addons.go:247] addon dashboard should already be in state true
	I1018 09:17:37.258533  324191 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-986220"
	I1018 09:17:37.258556  324191 host.go:66] Checking if "default-k8s-diff-port-986220" exists ...
	I1018 09:17:37.258865  324191 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:17:37.259004  324191 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:17:37.259025  324191 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:17:37.260113  324191 out.go:179] * Verifying Kubernetes components...
	I1018 09:17:37.261626  324191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:17:37.286269  324191 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-986220"
	W1018 09:17:37.286296  324191 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:17:37.286331  324191 host.go:66] Checking if "default-k8s-diff-port-986220" exists ...
	I1018 09:17:37.286858  324191 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:17:37.287792  324191 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:17:37.288550  324191 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:17:37.289322  324191 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:17:37.289348  324191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:17:37.289412  324191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:17:37.291651  324191 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:17:34.319514  317552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:17:34.819359  317552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:17:35.319701  317552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:17:35.818799  317552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:17:36.319567  317552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:17:36.819591  317552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:17:37.320244  317552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:17:37.441450  317552 kubeadm.go:1113] duration metric: took 5.22123918s to wait for elevateKubeSystemPrivileges
	I1018 09:17:37.441486  317552 kubeadm.go:402] duration metric: took 17.32244336s to StartCluster
	I1018 09:17:37.441517  317552 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:37.441581  317552 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:37.443905  317552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:17:37.444230  317552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:17:37.444241  317552 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:17:37.444338  317552 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:17:37.444445  317552 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-444637"
	I1018 09:17:37.444467  317552 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-444637"
	I1018 09:17:37.444500  317552 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:17:37.444523  317552 addons.go:69] Setting default-storageclass=true in profile "newest-cni-444637"
	I1018 09:17:37.444548  317552 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-444637"
	I1018 09:17:37.444600  317552 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:37.444902  317552 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:37.445059  317552 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:37.445994  317552 out.go:179] * Verifying Kubernetes components...
	I1018 09:17:37.449448  317552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:17:37.475568  317552 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:17:37.477110  317552 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:17:37.477133  317552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:17:37.477207  317552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:37.486694  317552 addons.go:238] Setting addon default-storageclass=true in "newest-cni-444637"
	I1018 09:17:37.486747  317552 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:17:37.487213  317552 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:37.524795  317552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:37.532535  317552 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:17:37.532557  317552 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:17:37.532617  317552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:37.569569  317552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:37.612831  317552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:17:37.657251  317552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:17:37.681226  317552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:17:37.721677  317552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:17:37.860861  317552 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
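	
	The shell pipeline above edits the live coredns ConfigMap in place: it fetches the Corefile, splices a hosts block resolving host.minikube.internal ahead of the forward directive, and replaces the ConfigMap. A sketch of just the splice step on a Corefile string (the input fragment is abbreviated; indentation matches the sed patterns in the log):
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// injectHostRecord inserts a hosts block immediately before the forward
	// directive, matching the sed insertion shown in the log.
	func injectHostRecord(corefile, hostIP string) string {
		block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		idx := strings.Index(corefile, "        forward . /etc/resolv.conf")
		if idx < 0 {
			return corefile // no forward directive; leave the Corefile untouched
		}
		return corefile[:idx] + block + corefile[idx:]
	}
	
	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
		fmt.Println(injectHostRecord(corefile, "192.168.103.1"))
	}
	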
	I1018 09:17:37.862748  317552 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:17:37.862806  317552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:17:38.119477  317552 api_server.go:72] duration metric: took 675.193903ms to wait for apiserver process to appear ...
	I1018 09:17:38.119506  317552 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:17:38.119527  317552 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:17:38.126669  317552 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 09:17:38.127871  317552 api_server.go:141] control plane version: v1.34.1
	I1018 09:17:38.127899  317552 api_server.go:131] duration metric: took 8.384627ms to wait for apiserver health ...
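	
	The healthz wait above is a plain HTTPS GET against the API server that succeeds once it sees a 200 with body "ok". A self-contained sketch of that probe; TLS verification is skipped here purely for brevity, where minikube instead trusts the cluster CA:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// probeHealthz issues the same GET the log shows and reports whether the
	// API server answered 200 with body "ok".
	func probeHealthz(url string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for the sketch: skip verification instead of
			// loading the cluster CA the way minikube does.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}
	
	func main() {
		healthy, err := probeHealthz("https://192.168.103.2:8443/healthz")
		fmt.Println(healthy, err)
	}
	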
	I1018 09:17:38.127909  317552 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:17:38.129220  317552 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:17:38.131297  317552 system_pods.go:59] 8 kube-system pods found
	I1018 09:17:38.130548  317552 addons.go:514] duration metric: took 686.212609ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:17:38.131388  317552 system_pods.go:61] "coredns-66bc5c9577-gc5dd" [7fab8a8d-bdb4-47d4-bf7d-d03341018666] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:17:38.131400  317552 system_pods.go:61] "etcd-newest-cni-444637" [b54d61ad-b52d-4343-ba3a-a64b03934319] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:17:38.131414  317552 system_pods.go:61] "kindnet-qmlcq" [2c82849a-5511-43a1-a300-a7f46df288ec] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 09:17:38.131432  317552 system_pods.go:61] "kube-apiserver-newest-cni-444637" [a9136c1f-8962-45f7-b005-05bd3f856403] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:17:38.131439  317552 system_pods.go:61] "kube-controller-manager-newest-cni-444637" [b8d840d7-04c3-495c-aafa-cc8a06e58f06] Running
	I1018 09:17:38.131449  317552 system_pods.go:61] "kube-proxy-hbkn5" [d70417da-43f2-4d8c-a088-07cea5225c34] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:17:38.131457  317552 system_pods.go:61] "kube-scheduler-newest-cni-444637" [175527c5-4260-4e39-be83-4c36417f3cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:17:38.131464  317552 system_pods.go:61] "storage-provisioner" [b0974a78-b6ad-45c3-8241-86f8bb7bc65b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:17:38.131475  317552 system_pods.go:74] duration metric: took 3.557693ms to wait for pod list to return data ...
	I1018 09:17:38.131486  317552 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:17:38.134292  317552 default_sa.go:45] found service account: "default"
	I1018 09:17:38.134317  317552 default_sa.go:55] duration metric: took 2.823084ms for default service account to be created ...
	I1018 09:17:38.134332  317552 kubeadm.go:586] duration metric: took 690.052898ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:17:38.134382  317552 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:17:38.137331  317552 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:17:38.137385  317552 node_conditions.go:123] node cpu capacity is 8
	I1018 09:17:38.137402  317552 node_conditions.go:105] duration metric: took 3.014772ms to run NodePressure ...
	I1018 09:17:38.137416  317552 start.go:241] waiting for startup goroutines ...
	I1018 09:17:38.366373  317552 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-444637" context rescaled to 1 replicas
	I1018 09:17:38.366417  317552 start.go:246] waiting for cluster config update ...
	I1018 09:17:38.366433  317552 start.go:255] writing updated cluster config ...
	I1018 09:17:38.366793  317552 ssh_runner.go:195] Run: rm -f paused
	I1018 09:17:38.434848  317552 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:17:38.436935  317552 out.go:179] * Done! kubectl is now configured to use "newest-cni-444637" cluster and "default" namespace by default
	I1018 09:17:37.293205  324191 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:17:37.293245  324191 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:17:37.293312  324191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:17:37.324438  324191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:17:37.333251  324191 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:17:37.333281  324191 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:17:37.333352  324191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:17:37.346365  324191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:17:37.372469  324191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:17:37.462098  324191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:17:37.492453  324191 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-986220" to be "Ready" ...
	I1018 09:17:37.496771  324191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:17:37.498535  324191 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:17:37.498564  324191 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:17:37.538950  324191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:17:37.548924  324191 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:17:37.548955  324191 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:17:37.584538  324191 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:17:37.584572  324191 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:17:37.630952  324191 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:17:37.630978  324191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:17:37.653547  324191 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:17:37.653576  324191 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:17:37.678743  324191 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:17:37.678773  324191 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:17:37.699584  324191 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:17:37.699667  324191 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:17:37.722939  324191 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:17:37.722965  324191 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:17:37.744685  324191 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:17:37.744786  324191 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:17:37.768461  324191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
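	
	All ten dashboard manifests scp'd above land in a single kubectl apply with one -f flag per file. A sketch of building that invocation (uses the --kubeconfig flag rather than the KUBECONFIG environment variable and sudo the log shows; the manifest list is abbreviated):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// applyManifests mirrors the log's single kubectl invocation: one apply
	// carrying a -f flag for each previously copied addon manifest.
	func applyManifests(kubectl, kubeconfig string, manifests []string) error {
		args := []string{"--kubeconfig=" + kubeconfig, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command(kubectl, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply: %v\n%s", err, out)
		}
		return nil
	}
	
	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-dp.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
			// ...remaining dashboard manifests from the log
		}
		err := applyManifests("/var/lib/minikube/binaries/v1.34.1/kubectl",
			"/var/lib/minikube/kubeconfig", manifests)
		fmt.Println(err)
	}
	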
	I1018 09:17:39.122191  324191 node_ready.go:49] node "default-k8s-diff-port-986220" is "Ready"
	I1018 09:17:39.122222  324191 node_ready.go:38] duration metric: took 1.629722799s for node "default-k8s-diff-port-986220" to be "Ready" ...
	I1018 09:17:39.122236  324191 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:17:39.122288  324191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	
	
	==> CRI-O <==
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.934745239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.93939091Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=db4fba7d-1376-401c-8cb6-3161a939bcc5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.940310804Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8105cef9-4252-48e2-b0a7-d0c2919c9072 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.94125827Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.94217761Z" level=info msg="Ran pod sandbox 762bb7c43b5c439868c31c9510fe49e5f673fe51f11eca40d92c7b8535f09323 with infra container: kube-system/kindnet-qmlcq/POD" id=db4fba7d-1376-401c-8cb6-3161a939bcc5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.942927173Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.94411205Z" level=info msg="Ran pod sandbox 84bf80e60047adbf259f46b42869ef3e96b98cd921d411a93f317eea3254ef60 with infra container: kube-system/kube-proxy-hbkn5/POD" id=8105cef9-4252-48e2-b0a7-d0c2919c9072 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.944526477Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1d666204-a7c1-4c13-bdd1-b4fb4cb1ea6e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.945713386Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=35357b4a-8cf3-4ffb-82c8-03b8e68d3b1a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.946014253Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ce3e2172-1bd6-4889-ab56-57e6eec0ceae name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.948494578Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=523a0da7-a3a7-488d-b3c1-4417e6d8c5e8 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.952472929Z" level=info msg="Creating container: kube-system/kindnet-qmlcq/kindnet-cni" id=82b5ab18-3857-480e-9067-2a1c004c262a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.954653816Z" level=info msg="Creating container: kube-system/kube-proxy-hbkn5/kube-proxy" id=99d6ac08-db55-48a2-a33c-ea9f4a125ed0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.955145665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.955804335Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.963518971Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.964279763Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.964321073Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.964979594Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.997574583Z" level=info msg="Created container 4c772441e8328f031fa48f4119f4225af75421b39b164e04386462f8de9a47fc: kube-system/kindnet-qmlcq/kindnet-cni" id=82b5ab18-3857-480e-9067-2a1c004c262a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:37 newest-cni-444637 crio[782]: time="2025-10-18T09:17:37.998762295Z" level=info msg="Starting container: 4c772441e8328f031fa48f4119f4225af75421b39b164e04386462f8de9a47fc" id=68232a88-8ed5-4802-9dce-c19d3aaee7f7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:17:38 newest-cni-444637 crio[782]: time="2025-10-18T09:17:38.001776479Z" level=info msg="Started container" PID=1642 containerID=4c772441e8328f031fa48f4119f4225af75421b39b164e04386462f8de9a47fc description=kube-system/kindnet-qmlcq/kindnet-cni id=68232a88-8ed5-4802-9dce-c19d3aaee7f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=762bb7c43b5c439868c31c9510fe49e5f673fe51f11eca40d92c7b8535f09323
	Oct 18 09:17:38 newest-cni-444637 crio[782]: time="2025-10-18T09:17:38.00676437Z" level=info msg="Created container 71aaf1f6fa461d0b174bd6d4a9a098968369ad507c13d889c3bacc71c5040131: kube-system/kube-proxy-hbkn5/kube-proxy" id=99d6ac08-db55-48a2-a33c-ea9f4a125ed0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:38 newest-cni-444637 crio[782]: time="2025-10-18T09:17:38.007772027Z" level=info msg="Starting container: 71aaf1f6fa461d0b174bd6d4a9a098968369ad507c13d889c3bacc71c5040131" id=e274e54e-1c8f-4d0a-b618-ca737fd4cbe5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:17:38 newest-cni-444637 crio[782]: time="2025-10-18T09:17:38.01204463Z" level=info msg="Started container" PID=1643 containerID=71aaf1f6fa461d0b174bd6d4a9a098968369ad507c13d889c3bacc71c5040131 description=kube-system/kube-proxy-hbkn5/kube-proxy id=e274e54e-1c8f-4d0a-b618-ca737fd4cbe5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=84bf80e60047adbf259f46b42869ef3e96b98cd921d411a93f317eea3254ef60
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	71aaf1f6fa461       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   2 seconds ago       Running             kube-proxy                0                   84bf80e60047a       kube-proxy-hbkn5                            kube-system
	4c772441e8328       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   762bb7c43b5c4       kindnet-qmlcq                               kube-system
	7e2f6296fa7ed       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   14 seconds ago      Running             kube-apiserver            0                   b90c335f66ec8       kube-apiserver-newest-cni-444637            kube-system
	5897fcbb0bcc2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   14 seconds ago      Running             kube-controller-manager   0                   53c958a9e477d       kube-controller-manager-newest-cni-444637   kube-system
	d020bd723ca43       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   14 seconds ago      Running             etcd                      0                   4f6e08992a93a       etcd-newest-cni-444637                      kube-system
	f6dd3bed3ee70       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   14 seconds ago      Running             kube-scheduler            0                   4727427e7bd4c       kube-scheduler-newest-cni-444637            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-444637
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-444637
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=newest-cni-444637
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_17_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:17:28 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-444637
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:17:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:17:31 +0000   Sat, 18 Oct 2025 09:17:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:17:31 +0000   Sat, 18 Oct 2025 09:17:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:17:31 +0000   Sat, 18 Oct 2025 09:17:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 09:17:31 +0000   Sat, 18 Oct 2025 09:17:25 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-444637
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                c20f4ce8-6abc-49e6-9924-f27306703b2d
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-444637                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11s
	  kube-system                 kindnet-qmlcq                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-444637             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-444637    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-proxy-hbkn5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-444637             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 16s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-444637 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-444637 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-444637 status is now: NodeHasSufficientPID
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-444637 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-444637 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s                 kubelet          Node newest-cni-444637 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-444637 event: Registered Node newest-cni-444637 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[ +43.809014] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 6f 4b 2b 7f 46 08 06
	[  +0.000367] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	
	
	==> etcd [d020bd723ca430c5565f0fbe220c3a737614a3ba8d128ecc594cd933695215c2] <==
	{"level":"warn","ts":"2025-10-18T09:17:27.167201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.178505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.188666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.197820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.209673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.225477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.233272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.243388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.259489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.268934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.278632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.287923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.297760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.306759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.315690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.322925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.335011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.343545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.352762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.364779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.377235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.391871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.404727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:27.484605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59554","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:17:28.833677Z","caller":"traceutil/trace.go:172","msg":"trace[2126286374] transaction","detail":"{read_only:false; response_revision:59; number_of_response:1; }","duration":"114.00266ms","start":"2025-10-18T09:17:28.719644Z","end":"2025-10-18T09:17:28.833647Z","steps":["trace[2126286374] 'process raft request'  (duration: 81.453325ms)","trace[2126286374] 'compare'  (duration: 32.312515ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:17:40 up  1:00,  0 user,  load average: 5.24, 3.86, 2.55
	Linux newest-cni-444637 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c772441e8328f031fa48f4119f4225af75421b39b164e04386462f8de9a47fc] <==
	I1018 09:17:38.214032       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:17:38.294387       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 09:17:38.294590       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:17:38.294612       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:17:38.294644       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:17:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:17:38.505919       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:17:38.505948       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:17:38.505960       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:17:38.795319       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:17:39.095544       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:17:39.095584       1 metrics.go:72] Registering metrics
	I1018 09:17:39.095680       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [7e2f6296fa7edc0490b5ff81524ace20b7c191883906970f3db3628b4e840b56] <==
	I1018 09:17:28.199966       1 policy_source.go:240] refreshing policies
	I1018 09:17:28.276456       1 controller.go:667] quota admission added evaluator for: namespaces
	E1018 09:17:28.289975       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1018 09:17:28.311882       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 09:17:28.312150       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:17:28.316983       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:17:28.369128       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:17:28.369453       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:17:29.084910       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:17:29.095832       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:17:29.095860       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:17:30.070940       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:17:30.146011       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:17:30.291319       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:17:30.300467       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1018 09:17:30.302168       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:17:30.309561       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:17:31.105776       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:17:31.338448       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:17:31.354942       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:17:31.368077       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:17:36.107375       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 09:17:36.707028       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:17:36.758255       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:17:36.762600       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5897fcbb0bcc2d9719b847a16a6b49b71d85ee4aece8627ca75d673b73fa2fdf] <==
	I1018 09:17:36.103473       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:17:36.103500       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:17:36.104718       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:17:36.104764       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:17:36.104803       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:17:36.104819       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:17:36.104804       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:17:36.104918       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:17:36.105076       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:17:36.105154       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:17:36.105595       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:17:36.106133       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:17:36.107613       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:17:36.107629       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:17:36.107637       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:17:36.107874       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:17:36.109780       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:17:36.109807       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:17:36.109780       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:17:36.109884       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:17:36.109936       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:17:36.109942       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:17:36.109948       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:17:36.121281       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-444637" podCIDRs=["10.42.0.0/24"]
	I1018 09:17:36.124571       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [71aaf1f6fa461d0b174bd6d4a9a098968369ad507c13d889c3bacc71c5040131] <==
	I1018 09:17:38.066960       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:17:38.136950       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:17:38.238224       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:17:38.238269       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1018 09:17:38.238402       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:17:38.263808       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:17:38.263888       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:17:38.271763       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:17:38.272211       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:17:38.272235       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:17:38.275010       1 config.go:200] "Starting service config controller"
	I1018 09:17:38.275036       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:17:38.275095       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:17:38.275128       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:17:38.275118       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:17:38.275138       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:17:38.275196       1 config.go:309] "Starting node config controller"
	I1018 09:17:38.275202       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:17:38.275209       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:17:38.375932       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:17:38.376033       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:17:38.376061       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f6dd3bed3ee70f583c29c33708155979f37b2754c0f719aed3c368fcc50cd3d2] <==
	E1018 09:17:28.193553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:17:28.193635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:17:28.193704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:17:28.193786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:17:28.194198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:17:28.194299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:17:29.015948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:17:29.018556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:17:29.030462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:17:29.157609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:17:29.168851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:17:29.211735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:17:29.221266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:17:29.223545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:17:29.257814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:17:29.356943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:17:29.379631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:17:29.425757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:17:29.605727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:17:29.606544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 09:17:29.629564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:17:29.630666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:17:29.646542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:17:29.673255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1018 09:17:31.987459       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:17:32 newest-cni-444637 kubelet[1344]: I1018 09:17:32.360588    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-444637" podStartSLOduration=3.3605608399999998 podStartE2EDuration="3.36056084s" podCreationTimestamp="2025-10-18 09:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:17:32.360125552 +0000 UTC m=+1.226193044" watchObservedRunningTime="2025-10-18 09:17:32.36056084 +0000 UTC m=+1.226628331"
	Oct 18 09:17:32 newest-cni-444637 kubelet[1344]: I1018 09:17:32.360734    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-444637" podStartSLOduration=1.360725631 podStartE2EDuration="1.360725631s" podCreationTimestamp="2025-10-18 09:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:17:32.349031258 +0000 UTC m=+1.215098749" watchObservedRunningTime="2025-10-18 09:17:32.360725631 +0000 UTC m=+1.226793123"
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: I1018 09:17:36.130845    1344 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: I1018 09:17:36.131619    1344 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: I1018 09:17:36.172623    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d70417da-43f2-4d8c-a088-07cea5225c34-lib-modules\") pod \"kube-proxy-hbkn5\" (UID: \"d70417da-43f2-4d8c-a088-07cea5225c34\") " pod="kube-system/kube-proxy-hbkn5"
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: I1018 09:17:36.172724    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2c82849a-5511-43a1-a300-a7f46df288ec-cni-cfg\") pod \"kindnet-qmlcq\" (UID: \"2c82849a-5511-43a1-a300-a7f46df288ec\") " pod="kube-system/kindnet-qmlcq"
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: I1018 09:17:36.172761    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c82849a-5511-43a1-a300-a7f46df288ec-lib-modules\") pod \"kindnet-qmlcq\" (UID: \"2c82849a-5511-43a1-a300-a7f46df288ec\") " pod="kube-system/kindnet-qmlcq"
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: I1018 09:17:36.172792    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdk97\" (UniqueName: \"kubernetes.io/projected/2c82849a-5511-43a1-a300-a7f46df288ec-kube-api-access-kdk97\") pod \"kindnet-qmlcq\" (UID: \"2c82849a-5511-43a1-a300-a7f46df288ec\") " pod="kube-system/kindnet-qmlcq"
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: I1018 09:17:36.173162    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d70417da-43f2-4d8c-a088-07cea5225c34-kube-proxy\") pod \"kube-proxy-hbkn5\" (UID: \"d70417da-43f2-4d8c-a088-07cea5225c34\") " pod="kube-system/kube-proxy-hbkn5"
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: I1018 09:17:36.173238    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d70417da-43f2-4d8c-a088-07cea5225c34-xtables-lock\") pod \"kube-proxy-hbkn5\" (UID: \"d70417da-43f2-4d8c-a088-07cea5225c34\") " pod="kube-system/kube-proxy-hbkn5"
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: I1018 09:17:36.173266    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pflrm\" (UniqueName: \"kubernetes.io/projected/d70417da-43f2-4d8c-a088-07cea5225c34-kube-api-access-pflrm\") pod \"kube-proxy-hbkn5\" (UID: \"d70417da-43f2-4d8c-a088-07cea5225c34\") " pod="kube-system/kube-proxy-hbkn5"
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: I1018 09:17:36.173309    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c82849a-5511-43a1-a300-a7f46df288ec-xtables-lock\") pod \"kindnet-qmlcq\" (UID: \"2c82849a-5511-43a1-a300-a7f46df288ec\") " pod="kube-system/kindnet-qmlcq"
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: E1018 09:17:36.280701    1344 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: E1018 09:17:36.280750    1344 projected.go:196] Error preparing data for projected volume kube-api-access-kdk97 for pod kube-system/kindnet-qmlcq: configmap "kube-root-ca.crt" not found
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: E1018 09:17:36.280851    1344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c82849a-5511-43a1-a300-a7f46df288ec-kube-api-access-kdk97 podName:2c82849a-5511-43a1-a300-a7f46df288ec nodeName:}" failed. No retries permitted until 2025-10-18 09:17:36.780816338 +0000 UTC m=+5.646883828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kdk97" (UniqueName: "kubernetes.io/projected/2c82849a-5511-43a1-a300-a7f46df288ec-kube-api-access-kdk97") pod "kindnet-qmlcq" (UID: "2c82849a-5511-43a1-a300-a7f46df288ec") : configmap "kube-root-ca.crt" not found
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: E1018 09:17:36.280916    1344 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: E1018 09:17:36.280946    1344 projected.go:196] Error preparing data for projected volume kube-api-access-pflrm for pod kube-system/kube-proxy-hbkn5: configmap "kube-root-ca.crt" not found
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: E1018 09:17:36.281033    1344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d70417da-43f2-4d8c-a088-07cea5225c34-kube-api-access-pflrm podName:d70417da-43f2-4d8c-a088-07cea5225c34 nodeName:}" failed. No retries permitted until 2025-10-18 09:17:36.781009771 +0000 UTC m=+5.647077257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pflrm" (UniqueName: "kubernetes.io/projected/d70417da-43f2-4d8c-a088-07cea5225c34-kube-api-access-pflrm") pod "kube-proxy-hbkn5" (UID: "d70417da-43f2-4d8c-a088-07cea5225c34") : configmap "kube-root-ca.crt" not found
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: E1018 09:17:36.880703    1344 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: E1018 09:17:36.880750    1344 projected.go:196] Error preparing data for projected volume kube-api-access-pflrm for pod kube-system/kube-proxy-hbkn5: configmap "kube-root-ca.crt" not found
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: E1018 09:17:36.880821    1344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d70417da-43f2-4d8c-a088-07cea5225c34-kube-api-access-pflrm podName:d70417da-43f2-4d8c-a088-07cea5225c34 nodeName:}" failed. No retries permitted until 2025-10-18 09:17:37.880799954 +0000 UTC m=+6.746867440 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-pflrm" (UniqueName: "kubernetes.io/projected/d70417da-43f2-4d8c-a088-07cea5225c34-kube-api-access-pflrm") pod "kube-proxy-hbkn5" (UID: "d70417da-43f2-4d8c-a088-07cea5225c34") : configmap "kube-root-ca.crt" not found
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: E1018 09:17:36.880703    1344 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: E1018 09:17:36.880854    1344 projected.go:196] Error preparing data for projected volume kube-api-access-kdk97 for pod kube-system/kindnet-qmlcq: configmap "kube-root-ca.crt" not found
	Oct 18 09:17:36 newest-cni-444637 kubelet[1344]: E1018 09:17:36.880890    1344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c82849a-5511-43a1-a300-a7f46df288ec-kube-api-access-kdk97 podName:2c82849a-5511-43a1-a300-a7f46df288ec nodeName:}" failed. No retries permitted until 2025-10-18 09:17:37.880879389 +0000 UTC m=+6.746946873 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kdk97" (UniqueName: "kubernetes.io/projected/2c82849a-5511-43a1-a300-a7f46df288ec-kube-api-access-kdk97") pod "kindnet-qmlcq" (UID: "2c82849a-5511-43a1-a300-a7f46df288ec") : configmap "kube-root-ca.crt" not found
	Oct 18 09:17:38 newest-cni-444637 kubelet[1344]: I1018 09:17:38.313971    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hbkn5" podStartSLOduration=2.313945884 podStartE2EDuration="2.313945884s" podCreationTimestamp="2025-10-18 09:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:17:38.313593986 +0000 UTC m=+7.179661478" watchObservedRunningTime="2025-10-18 09:17:38.313945884 +0000 UTC m=+7.180013377"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-444637 -n newest-cni-444637
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-444637 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gc5dd storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-444637 describe pod coredns-66bc5c9577-gc5dd storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-444637 describe pod coredns-66bc5c9577-gc5dd storage-provisioner: exit status 1 (63.782561ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gc5dd" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-444637 describe pod coredns-66bc5c9577-gc5dd storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.49s)
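The two NotFound errors above are consistent with a pod-name race rather than a cluster outage: the harness captures non-running pod names with a field selector, then describes those names in a second kubectl call, so a pod deleted or replaced by its controller in between exits 1 exactly as shown. A re-check that re-resolves namespace/name pairs in a single pipeline (a hedged sketch, not the harness code; the go-template and shell loop are illustrative):

	kubectl --context newest-cni-444637 get po -A --field-selector=status.phase!=Running \
	  -o go-template='{{range .items}}{{.metadata.namespace}} {{.metadata.name}}{{"\n"}}{{end}}' \
	  | while read ns name; do kubectl --context newest-cni-444637 describe po "$name" -n "$ns"; done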

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-444637 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-444637 --alsologtostderr -v=1: exit status 80 (2.437948612s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-444637 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:18:05.540181  332306 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:18:05.540472  332306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:18:05.540484  332306 out.go:374] Setting ErrFile to fd 2...
	I1018 09:18:05.540490  332306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:18:05.540756  332306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:18:05.541090  332306 out.go:368] Setting JSON to false
	I1018 09:18:05.541137  332306 mustload.go:65] Loading cluster: newest-cni-444637
	I1018 09:18:05.541550  332306 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:18:05.542016  332306 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:05.562084  332306 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:05.562477  332306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:18:05.622889  332306 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-18 09:18:05.611606095 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:18:05.623577  332306 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-444637 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:18:05.625521  332306 out.go:179] * Pausing node newest-cni-444637 ... 
	I1018 09:18:05.626563  332306 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:05.626813  332306 ssh_runner.go:195] Run: systemctl --version
	I1018 09:18:05.626852  332306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:05.646099  332306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:05.743367  332306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:18:05.757532  332306 pause.go:52] kubelet running: true
	I1018 09:18:05.757603  332306 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:18:05.903394  332306 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:18:05.903482  332306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:18:05.973002  332306 cri.go:89] found id: "ac55486a499ee462d3e0e111469c5d5af99e91ca5df597256a68ff492f0d410b"
	I1018 09:18:05.973029  332306 cri.go:89] found id: "12a571301d0c534347c1c7a177bd731868cad6ff75cd5a1c93af4981287ee430"
	I1018 09:18:05.973034  332306 cri.go:89] found id: "014aa61b2a700319893c6b11615bd85925597f0738e1bf960657bba99a7ac5ea"
	I1018 09:18:05.973039  332306 cri.go:89] found id: "b91ae2df424fdafe037b2eea7a39a37f80929e5ab4c76c1169ce7ba3b9a4bdbd"
	I1018 09:18:05.973043  332306 cri.go:89] found id: "49cf2e65f5a6801e31db940684d60041512ed73bbf34778abdfd8025afc8b25b"
	I1018 09:18:05.973050  332306 cri.go:89] found id: "390882244d27208c7b2d7d0538a0ff970ed197d0a63b391f3e1c81bd7b8255df"
	I1018 09:18:05.973053  332306 cri.go:89] found id: ""
	I1018 09:18:05.973124  332306 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:18:05.985555  332306 retry.go:31] will retry after 269.746402ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:05Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:18:06.256132  332306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:18:06.270162  332306 pause.go:52] kubelet running: false
	I1018 09:18:06.270219  332306 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:18:06.387951  332306 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:18:06.388064  332306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:18:06.457226  332306 cri.go:89] found id: "ac55486a499ee462d3e0e111469c5d5af99e91ca5df597256a68ff492f0d410b"
	I1018 09:18:06.457250  332306 cri.go:89] found id: "12a571301d0c534347c1c7a177bd731868cad6ff75cd5a1c93af4981287ee430"
	I1018 09:18:06.457256  332306 cri.go:89] found id: "014aa61b2a700319893c6b11615bd85925597f0738e1bf960657bba99a7ac5ea"
	I1018 09:18:06.457262  332306 cri.go:89] found id: "b91ae2df424fdafe037b2eea7a39a37f80929e5ab4c76c1169ce7ba3b9a4bdbd"
	I1018 09:18:06.457267  332306 cri.go:89] found id: "49cf2e65f5a6801e31db940684d60041512ed73bbf34778abdfd8025afc8b25b"
	I1018 09:18:06.457272  332306 cri.go:89] found id: "390882244d27208c7b2d7d0538a0ff970ed197d0a63b391f3e1c81bd7b8255df"
	I1018 09:18:06.457277  332306 cri.go:89] found id: ""
	I1018 09:18:06.457317  332306 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:18:06.469836  332306 retry.go:31] will retry after 333.718627ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:06Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:18:06.804447  332306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:18:06.817666  332306 pause.go:52] kubelet running: false
	I1018 09:18:06.817718  332306 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:18:06.936209  332306 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:18:06.936278  332306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:18:07.003600  332306 cri.go:89] found id: "ac55486a499ee462d3e0e111469c5d5af99e91ca5df597256a68ff492f0d410b"
	I1018 09:18:07.003627  332306 cri.go:89] found id: "12a571301d0c534347c1c7a177bd731868cad6ff75cd5a1c93af4981287ee430"
	I1018 09:18:07.003631  332306 cri.go:89] found id: "014aa61b2a700319893c6b11615bd85925597f0738e1bf960657bba99a7ac5ea"
	I1018 09:18:07.003634  332306 cri.go:89] found id: "b91ae2df424fdafe037b2eea7a39a37f80929e5ab4c76c1169ce7ba3b9a4bdbd"
	I1018 09:18:07.003637  332306 cri.go:89] found id: "49cf2e65f5a6801e31db940684d60041512ed73bbf34778abdfd8025afc8b25b"
	I1018 09:18:07.003640  332306 cri.go:89] found id: "390882244d27208c7b2d7d0538a0ff970ed197d0a63b391f3e1c81bd7b8255df"
	I1018 09:18:07.003643  332306 cri.go:89] found id: ""
	I1018 09:18:07.003689  332306 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:18:07.015763  332306 retry.go:31] will retry after 690.1206ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:07Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:18:07.706657  332306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:18:07.720444  332306 pause.go:52] kubelet running: false
	I1018 09:18:07.720496  332306 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:18:07.834126  332306 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:18:07.834209  332306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:18:07.904645  332306 cri.go:89] found id: "ac55486a499ee462d3e0e111469c5d5af99e91ca5df597256a68ff492f0d410b"
	I1018 09:18:07.904670  332306 cri.go:89] found id: "12a571301d0c534347c1c7a177bd731868cad6ff75cd5a1c93af4981287ee430"
	I1018 09:18:07.904675  332306 cri.go:89] found id: "014aa61b2a700319893c6b11615bd85925597f0738e1bf960657bba99a7ac5ea"
	I1018 09:18:07.904679  332306 cri.go:89] found id: "b91ae2df424fdafe037b2eea7a39a37f80929e5ab4c76c1169ce7ba3b9a4bdbd"
	I1018 09:18:07.904683  332306 cri.go:89] found id: "49cf2e65f5a6801e31db940684d60041512ed73bbf34778abdfd8025afc8b25b"
	I1018 09:18:07.904687  332306 cri.go:89] found id: "390882244d27208c7b2d7d0538a0ff970ed197d0a63b391f3e1c81bd7b8255df"
	I1018 09:18:07.904692  332306 cri.go:89] found id: ""
	I1018 09:18:07.904741  332306 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:18:07.918833  332306 out.go:203] 
	W1018 09:18:07.920254  332306 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:18:07.920282  332306 out.go:285] * 
	* 
	W1018 09:18:07.924377  332306 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:18:07.925980  332306 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-444637 --alsologtostderr -v=1 failed: exit status 80
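The exit status 80 reduces to the probe visible in the stderr above: after disabling the kubelet, the pause path enumerates CRI containers and then runs sudo runc list -f json, which fails on every retry with open /run/runc: no such file or directory. The probe can be replayed by hand against the node (a hedged sketch: docker exec stands in for the harness's SSH session on port 33133; the crictl and runc invocations mirror the ones logged above):

	docker exec newest-cni-444637 sudo runc list -f json      # reproduces: open /run/runc: no such file or directory
	docker exec newest-cni-444637 ls -ld /run/runc            # check whether runc's default state directory exists
	docker exec newest-cni-444637 sudo crictl ps -a --quiet   # the CRI-side listing that still succeeds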
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-444637
helpers_test.go:243: (dbg) docker inspect newest-cni-444637:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941",
	        "Created": "2025-10-18T09:17:13.777714578Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 330393,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:17:54.667607343Z",
	            "FinishedAt": "2025-10-18T09:17:53.054918167Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941/hostname",
	        "HostsPath": "/var/lib/docker/containers/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941/hosts",
	        "LogPath": "/var/lib/docker/containers/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941-json.log",
	        "Name": "/newest-cni-444637",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-444637:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-444637",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941",
	                "LowerDir": "/var/lib/docker/overlay2/299ed2d3b858283b2c8206fda315e99ed5d127ab10dbdaecabdcb4955ace8dbc-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/299ed2d3b858283b2c8206fda315e99ed5d127ab10dbdaecabdcb4955ace8dbc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/299ed2d3b858283b2c8206fda315e99ed5d127ab10dbdaecabdcb4955ace8dbc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/299ed2d3b858283b2c8206fda315e99ed5d127ab10dbdaecabdcb4955ace8dbc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-444637",
	                "Source": "/var/lib/docker/volumes/newest-cni-444637/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-444637",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-444637",
	                "name.minikube.sigs.k8s.io": "newest-cni-444637",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6eade8f3ef5f4aab3a7759e779f30593ae1de7dbe971ffea92f10612f0f06184",
	            "SandboxKey": "/var/run/docker/netns/6eade8f3ef5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-444637": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:af:6f:0f:e5:bb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dd9c4a8b133b5630e17e447400d74046fdd59b021f81b3128919b2fa8ae8dbbe",
	                    "EndpointID": "0fd6cda962ca3b0dfbdcf9d50b3a275aaa14b1dba26c704c76ba71db05464b91",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-444637",
	                        "891566b377ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-444637 -n newest-cni-444637
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-444637 -n newest-cni-444637: exit status 2 (321.889465ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-444637 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-031066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p no-preload-031066 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-880603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-880603 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ image   │ old-k8s-version-951975 image list --format=json                                                                                                                                                                                               │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ pause   │ -p old-k8s-version-951975 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-986220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-880603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-986220 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ image   │ no-preload-031066 image list --format=json                                                                                                                                                                                                    │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ pause   │ -p no-preload-031066 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-986220 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ delete  │ -p no-preload-031066                                                                                                                                                                                                                          │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p no-preload-031066                                                                                                                                                                                                                          │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-444637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ stop    │ -p newest-cni-444637 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-444637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:18 UTC │
	│ image   │ newest-cni-444637 image list --format=json                                                                                                                                                                                                    │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ pause   │ -p newest-cni-444637 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
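
The final audit row is the step under test here. Reproducing it by hand is just a matter of re-running that row's command against the same profile; a sketch, with the arguments copied from the table:

	out/minikube-linux-amd64 pause -p newest-cni-444637 --alsologtostderr -v=1
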
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:17:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:17:54.427005  330193 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:17:54.427270  330193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:54.427281  330193 out.go:374] Setting ErrFile to fd 2...
	I1018 09:17:54.427287  330193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:54.427525  330193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:17:54.428050  330193 out.go:368] Setting JSON to false
	I1018 09:17:54.429280  330193 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3622,"bootTime":1760775452,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:17:54.429387  330193 start.go:141] virtualization: kvm guest
	I1018 09:17:54.431635  330193 out.go:179] * [newest-cni-444637] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:17:54.432952  330193 notify.go:220] Checking for updates...
	I1018 09:17:54.432979  330193 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:17:54.434488  330193 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:17:54.435897  330193 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:54.437111  330193 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:17:54.438264  330193 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:17:54.439545  330193 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:17:54.441204  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:54.441727  330193 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:17:54.467746  330193 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:17:54.467827  330193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:54.527403  330193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 09:17:54.515566485 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
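
The docker info blob above is the raw Go struct; the same data is easier to query as JSON. A minimal sketch (assuming jq is available on the agent) that pulls the cgroup driver the CRI-O configuration later in this start sequence has to match:

	docker system info --format '{{json .}}' | jq -r .CgroupDriver
	# systemd
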
	I1018 09:17:54.527559  330193 docker.go:318] overlay module found
	I1018 09:17:54.529436  330193 out.go:179] * Using the docker driver based on existing profile
	I1018 09:17:54.530557  330193 start.go:305] selected driver: docker
	I1018 09:17:54.530578  330193 start.go:925] validating driver "docker" against &{Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:54.530680  330193 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:17:54.531357  330193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:54.591156  330193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 09:17:54.580755477 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:54.591532  330193 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:17:54.591566  330193 cni.go:84] Creating CNI manager for ""
	I1018 09:17:54.591617  330193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:17:54.591683  330193 start.go:349] cluster config:
	{Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
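
This cluster config is persisted as the profile's config.json (path shown at the "Saving config" lines elsewhere in this log). A sketch for pulling out the fields that matter here, assuming jq and that the JSON keys mirror the struct field names in the dump:

	jq '{Name, KubernetesVersion: .KubernetesConfig.KubernetesVersion, ContainerRuntime: .KubernetesConfig.ContainerRuntime}' \
	  /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json
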
	I1018 09:17:54.593449  330193 out.go:179] * Starting "newest-cni-444637" primary control-plane node in "newest-cni-444637" cluster
	I1018 09:17:54.594724  330193 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:17:54.596122  330193 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:17:54.597292  330193 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:17:54.597335  330193 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:17:54.597376  330193 cache.go:58] Caching tarball of preloaded images
	I1018 09:17:54.597366  330193 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:17:54.597499  330193 preload.go:233] Found /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:17:54.597519  330193 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:17:54.597628  330193 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json ...
	I1018 09:17:54.619906  330193 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:17:54.619924  330193 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:17:54.619939  330193 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:17:54.619961  330193 start.go:360] acquireMachinesLock for newest-cni-444637: {Name:mkf6974ca6fc7b22cdf212b383f50d3f090ea59b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:17:54.620020  330193 start.go:364] duration metric: took 43.166µs to acquireMachinesLock for "newest-cni-444637"
	I1018 09:17:54.620037  330193 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:17:54.620042  330193 fix.go:54] fixHost starting: 
	I1018 09:17:54.620234  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:54.638627  330193 fix.go:112] recreateIfNeeded on newest-cni-444637: state=Stopped err=<nil>
	W1018 09:17:54.638652  330193 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:17:51.833553  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:53.833757  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:56.034833  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:17:58.534991  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	I1018 09:17:54.640543  330193 out.go:252] * Restarting existing docker container for "newest-cni-444637" ...
	I1018 09:17:54.640644  330193 cli_runner.go:164] Run: docker start newest-cni-444637
	I1018 09:17:54.903916  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:54.923445  330193 kic.go:430] container "newest-cni-444637" state is running.
	I1018 09:17:54.923919  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:54.944878  330193 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json ...
	I1018 09:17:54.945143  330193 machine.go:93] provisionDockerMachine start ...
	I1018 09:17:54.945221  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:54.965135  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:54.965422  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:54.965438  330193 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:17:54.966008  330193 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59674->127.0.0.1:33133: read: connection reset by peer
	I1018 09:17:58.102821  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-444637
	
	I1018 09:17:58.102846  330193 ubuntu.go:182] provisioning hostname "newest-cni-444637"
	I1018 09:17:58.102902  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.121992  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.122251  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.122274  330193 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-444637 && echo "newest-cni-444637" | sudo tee /etc/hostname
	I1018 09:17:58.271611  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-444637
	
	I1018 09:17:58.271696  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.295116  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.295331  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.295366  330193 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-444637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-444637/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-444637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:17:58.435338  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:17:58.435406  330193 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:17:58.435457  330193 ubuntu.go:190] setting up certificates
	I1018 09:17:58.435470  330193 provision.go:84] configureAuth start
	I1018 09:17:58.435550  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:58.454683  330193 provision.go:143] copyHostCerts
	I1018 09:17:58.454758  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:17:58.454789  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:17:58.454878  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:17:58.455021  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:17:58.455032  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:17:58.455077  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:17:58.455176  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:17:58.455185  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:17:58.455229  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:17:58.455323  330193 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-444637 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-444637]
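
The server cert is regenerated with the SAN list shown above; openssl (1.1.1+ for the -ext flag) can confirm the cert actually carries it. A sketch, with the path taken from the provision log:

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem
	# expect: 127.0.0.1, 192.168.103.2, localhost, minikube, newest-cni-444637
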
	I1018 09:17:58.651717  330193 provision.go:177] copyRemoteCerts
	I1018 09:17:58.651791  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:17:58.651850  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.670990  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:58.769295  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:17:58.788403  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:17:58.807495  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:17:58.826308  330193 provision.go:87] duration metric: took 390.822036ms to configureAuth
	I1018 09:17:58.826335  330193 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:17:58.826534  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:58.826624  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.845940  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.846169  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.846191  330193 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:17:59.117215  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:17:59.117238  330193 machine.go:96] duration metric: took 4.172078969s to provisionDockerMachine
	I1018 09:17:59.117253  330193 start.go:293] postStartSetup for "newest-cni-444637" (driver="docker")
	I1018 09:17:59.117266  330193 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:17:59.117338  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:17:59.117401  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.136996  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.235549  330193 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:17:59.239452  330193 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:17:59.239483  330193 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:17:59.239505  330193 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:17:59.239563  330193 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:17:59.239658  330193 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:17:59.239788  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:17:59.248379  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:17:59.268012  330193 start.go:296] duration metric: took 150.737252ms for postStartSetup
	I1018 09:17:59.268099  330193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:17:59.268146  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.287401  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.382795  330193 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:17:59.388305  330193 fix.go:56] duration metric: took 4.768253133s for fixHost
	I1018 09:17:59.388338  330193 start.go:83] releasing machines lock for "newest-cni-444637", held for 4.76830641s
	I1018 09:17:59.388481  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:59.407756  330193 ssh_runner.go:195] Run: cat /version.json
	I1018 09:17:59.407798  330193 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:17:59.407876  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.407803  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	W1018 09:17:56.333478  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	I1018 09:17:58.333556  318609 pod_ready.go:94] pod "coredns-66bc5c9577-7fnw7" is "Ready"
	I1018 09:17:58.333585  318609 pod_ready.go:86] duration metric: took 36.506179321s for pod "coredns-66bc5c9577-7fnw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.336410  318609 pod_ready.go:83] waiting for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.341932  318609 pod_ready.go:94] pod "etcd-embed-certs-880603" is "Ready"
	I1018 09:17:58.341964  318609 pod_ready.go:86] duration metric: took 5.525225ms for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.344669  318609 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.349852  318609 pod_ready.go:94] pod "kube-apiserver-embed-certs-880603" is "Ready"
	I1018 09:17:58.349882  318609 pod_ready.go:86] duration metric: took 5.170321ms for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.352067  318609 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.532002  318609 pod_ready.go:94] pod "kube-controller-manager-embed-certs-880603" is "Ready"
	I1018 09:17:58.532034  318609 pod_ready.go:86] duration metric: took 179.946406ms for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.732243  318609 pod_ready.go:83] waiting for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.131632  318609 pod_ready.go:94] pod "kube-proxy-k4kcs" is "Ready"
	I1018 09:17:59.131665  318609 pod_ready.go:86] duration metric: took 399.394452ms for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.332088  318609 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.734805  318609 pod_ready.go:94] pod "kube-scheduler-embed-certs-880603" is "Ready"
	I1018 09:17:59.734842  318609 pod_ready.go:86] duration metric: took 402.724813ms for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.734856  318609 pod_ready.go:40] duration metric: took 37.912005765s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:17:59.783224  318609 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:17:59.785136  318609 out.go:179] * Done! kubectl is now configured to use "embed-certs-880603" cluster and "default" namespace by default
	I1018 09:17:59.428145  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.430455  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.580030  330193 ssh_runner.go:195] Run: systemctl --version
	I1018 09:17:59.587085  330193 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:17:59.625510  330193 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:17:59.630784  330193 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:17:59.630846  330193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:17:59.639622  330193 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:17:59.639650  330193 start.go:495] detecting cgroup driver to use...
	I1018 09:17:59.639695  330193 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:17:59.639752  330193 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:17:59.654825  330193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:17:59.668280  330193 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:17:59.668366  330193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:17:59.683973  330193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:17:59.698385  330193 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:17:59.790586  330193 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:17:59.892076  330193 docker.go:234] disabling docker service ...
	I1018 09:17:59.892147  330193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:17:59.908881  330193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:17:59.922861  330193 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:18:00.012767  330193 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:18:00.112051  330193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:18:00.125686  330193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:18:00.142184  330193 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:18:00.142248  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.153446  330193 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:18:00.153510  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.163772  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.173529  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.183180  330193 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:18:00.192357  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.202160  330193 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.211313  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
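
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following lines (reconstructed from the commands, not a capture of the file):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
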
	I1018 09:18:00.221003  330193 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:18:00.229269  330193 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:18:00.238137  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:00.320620  330193 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:18:00.435033  330193 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:18:00.435106  330193 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:18:00.439539  330193 start.go:563] Will wait 60s for crictl version
	I1018 09:18:00.439606  330193 ssh_runner.go:195] Run: which crictl
	I1018 09:18:00.443682  330193 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:18:00.469987  330193 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:18:00.470070  330193 ssh_runner.go:195] Run: crio --version
	I1018 09:18:00.500186  330193 ssh_runner.go:195] Run: crio --version
	I1018 09:18:00.531772  330193 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:18:00.533155  330193 cli_runner.go:164] Run: docker network inspect newest-cni-444637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
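
The network-inspect template above is dense, but the subnet and gateway it extracts can be confirmed with a smaller template over the same IPAM config; a sketch, expected values taken from the container's network settings at the top of this dump:

	docker network inspect newest-cni-444637 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# 192.168.103.0/24 via 192.168.103.1
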
	I1018 09:18:00.552284  330193 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:18:00.556833  330193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:18:00.569469  330193 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:18:00.570643  330193 kubeadm.go:883] updating cluster {Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:18:00.570761  330193 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:18:00.570826  330193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:18:00.604611  330193 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:18:00.604633  330193 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:18:00.604679  330193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:18:00.632395  330193 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:18:00.632438  330193 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:18:00.632446  330193 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:18:00.632555  330193 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-444637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
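
Once the drop-in is written and kubelet restarted, the effective unit (base file plus 10-kubeadm.conf) can be read back from the node; a sketch using minikube's ssh wrapper:

	out/minikube-linux-amd64 -p newest-cni-444637 ssh "systemctl cat kubelet"
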
	I1018 09:18:00.632630  330193 ssh_runner.go:195] Run: crio config
	I1018 09:18:00.683711  330193 cni.go:84] Creating CNI manager for ""
	I1018 09:18:00.683732  330193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:18:00.683746  330193 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:18:00.683770  330193 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-444637 NodeName:newest-cni-444637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:18:00.683897  330193 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-444637"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
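
The rendered kubeadm config is copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp lines below). Recent kubeadm releases can sanity-check such a file before it is used; a sketch from inside the node, with the binaries path taken from this log:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
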
	I1018 09:18:00.683961  330193 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:18:00.693538  330193 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:18:00.693611  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:18:00.701785  330193 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:18:00.715623  330193 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:18:00.729315  330193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:18:00.742706  330193 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:18:00.746993  330193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:18:00.758274  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:00.846197  330193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:18:00.874953  330193 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637 for IP: 192.168.103.2
	I1018 09:18:00.874980  330193 certs.go:195] generating shared ca certs ...
	I1018 09:18:00.875000  330193 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:00.875152  330193 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:18:00.875197  330193 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:18:00.875207  330193 certs.go:257] generating profile certs ...
	I1018 09:18:00.875295  330193 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/client.key
	I1018 09:18:00.875391  330193 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key.d9d366ba
	I1018 09:18:00.875439  330193 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key
	I1018 09:18:00.875557  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:18:00.875586  330193 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:18:00.875596  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:18:00.875619  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:18:00.875641  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:18:00.875661  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:18:00.875704  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:18:00.876245  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:18:00.896645  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:18:00.916475  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:18:00.937413  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:18:00.962164  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:18:00.982149  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:18:01.001065  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:18:01.021602  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:18:01.041260  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:18:01.060553  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:18:01.080521  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:18:01.099406  330193 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:18:01.112902  330193 ssh_runner.go:195] Run: openssl version
	I1018 09:18:01.119558  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:18:01.128761  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.133075  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.133130  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.169581  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:18:01.178326  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:18:01.187653  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.191858  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.191912  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.227900  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:18:01.236865  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:18:01.245974  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.250554  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.250615  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.285905  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
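	(Editor's note: the <hash>.0 names above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's trust-directory convention: a CA is found by the subject-name hash that `openssl x509 -hash -noout` prints, so each installed PEM gets a symlink named after that hash. A sketch of the same two steps driven from Go, shelling out to the openssl binary exactly as the log does; linkBySubjectHash is a hypothetical helper and would need root:

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "path/filepath"
	        "strings"
	    )

	    // linkBySubjectHash asks openssl for the cert's subject hash and creates
	    // the /etc/ssl/certs/<hash>.0 symlink that OpenSSL's lookup expects.
	    func linkBySubjectHash(pem string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	        if err != nil {
	            return err
	        }
	        hash := strings.TrimSpace(string(out))
	        link := filepath.Join("/etc/ssl/certs", hash+".0")
	        os.Remove(link) // replace any stale link, like ln -fs
	        return os.Symlink(pem, link)
	    }

	    func main() {
	        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	        }
	    })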
	I1018 09:18:01.295059  330193 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:18:01.299170  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:18:01.334401  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:18:01.369411  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:18:01.417245  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:18:01.463956  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:18:01.519260  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
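	(Editor's note: `openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds, which is how minikube decides whether the control-plane certs can be reused for at least another day. The same test in pure Go, as a small standard-library-only sketch; expiresWithin is a hypothetical helper:

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    // expiresWithin reports whether the first certificate in the PEM file is
	    // no longer valid `checkend` from now (openssl x509 -checkend semantics).
	    func expiresWithin(path string, checkend time.Duration) (bool, error) {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return false, err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return false, fmt.Errorf("%s: no PEM block", path)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(checkend).After(cert.NotAfter), nil
	    }

	    func main() {
	        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(2)
	        }
	        fmt.Println("expires within 24h:", soon)
	    })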
	I1018 09:18:01.564643  330193 kubeadm.go:400] StartCluster: {Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:18:01.564725  330193 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:18:01.564799  330193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:18:01.596025  330193 cri.go:89] found id: "014aa61b2a700319893c6b11615bd85925597f0738e1bf960657bba99a7ac5ea"
	I1018 09:18:01.596053  330193 cri.go:89] found id: "b91ae2df424fdafe037b2eea7a39a37f80929e5ab4c76c1169ce7ba3b9a4bdbd"
	I1018 09:18:01.596059  330193 cri.go:89] found id: "49cf2e65f5a6801e31db940684d60041512ed73bbf34778abdfd8025afc8b25b"
	I1018 09:18:01.596064  330193 cri.go:89] found id: "390882244d27208c7b2d7d0538a0ff970ed197d0a63b391f3e1c81bd7b8255df"
	I1018 09:18:01.596069  330193 cri.go:89] found id: ""
	I1018 09:18:01.596114  330193 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:18:01.609602  330193 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:01Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:18:01.609687  330193 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:18:01.619278  330193 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:18:01.619297  330193 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:18:01.619376  330193 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:18:01.628525  330193 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:18:01.629710  330193 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-444637" does not appear in /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:18:01.630508  330193 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-5897/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-444637" cluster setting kubeconfig missing "newest-cni-444637" context setting]
	I1018 09:18:01.631708  330193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.633868  330193 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:18:01.643225  330193 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
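	(Editor's note: minikube renders kubeadm.yaml.new, diffs it against the kubeadm.yaml already on the node, and only re-runs kubeadm when they differ; here the diff was empty, so restartPrimaryControlPlane finishes in roughly 24ms. A sketch of that decision, with needsReconfig as a hypothetical helper:

	    package main

	    import (
	        "bytes"
	        "fmt"
	        "os"
	    )

	    // needsReconfig reports whether the freshly rendered kubeadm config
	    // differs from the one deployed on the node (the `sudo diff -u` step).
	    func needsReconfig(current, fresh string) (bool, error) {
	        a, err := os.ReadFile(current)
	        if err != nil {
	            return true, err // missing config => reconfigure
	        }
	        b, err := os.ReadFile(fresh)
	        if err != nil {
	            return true, err
	        }
	        return !bytes.Equal(a, b), nil
	    }

	    func main() {
	        diff, _ := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	        fmt.Println("needs reconfiguration:", diff)
	    })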
	I1018 09:18:01.643268  330193 kubeadm.go:601] duration metric: took 23.964839ms to restartPrimaryControlPlane
	I1018 09:18:01.643282  330193 kubeadm.go:402] duration metric: took 78.647978ms to StartCluster
	I1018 09:18:01.643303  330193 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.643398  330193 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:18:01.645409  330193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.645688  330193 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:18:01.645769  330193 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:18:01.645862  330193 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-444637"
	I1018 09:18:01.645882  330193 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-444637"
	W1018 09:18:01.645893  330193 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:18:01.645893  330193 addons.go:69] Setting dashboard=true in profile "newest-cni-444637"
	I1018 09:18:01.645921  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.645934  330193 addons.go:238] Setting addon dashboard=true in "newest-cni-444637"
	W1018 09:18:01.645945  330193 addons.go:247] addon dashboard should already be in state true
	I1018 09:18:01.645945  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:18:01.645948  330193 addons.go:69] Setting default-storageclass=true in profile "newest-cni-444637"
	I1018 09:18:01.645973  330193 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-444637"
	I1018 09:18:01.645980  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.646303  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.646463  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.646481  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.647698  330193 out.go:179] * Verifying Kubernetes components...
	I1018 09:18:01.649210  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:01.673812  330193 addons.go:238] Setting addon default-storageclass=true in "newest-cni-444637"
	W1018 09:18:01.673837  330193 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:18:01.673877  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.674375  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.674516  330193 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:18:01.678901  330193 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:18:01.678924  330193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:18:01.678985  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.679140  330193 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:18:01.680475  330193 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:18:01.681672  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:18:01.681729  330193 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:18:01.681827  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.707736  330193 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:18:01.707766  330193 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:18:01.707826  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.713270  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.719016  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.734187  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
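	(Editor's note: the `docker container inspect -f` format string above walks the container's NetworkSettings.Ports map to extract the host port bound to 22/tcp, and the three ssh clients that follow connect to 127.0.0.1 on the port it returned, 33133 here. The same Go template can be evaluated locally against a stub of the inspect document; the stub values below are illustrative:

	    package main

	    import (
	        "os"
	        "text/template"
	    )

	    // The template docker -f evaluates; docker feeds it the real inspect JSON.
	    const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`

	    func main() {
	        stub := map[string]any{
	            "NetworkSettings": map[string]any{
	                "Ports": map[string]any{
	                    "22/tcp": []map[string]any{{"HostIP": "127.0.0.1", "HostPort": "33133"}},
	                },
	            },
	        }
	        t := template.Must(template.New("port").Parse(tmpl))
	        t.Execute(os.Stdout, stub) // prints 33133
	    })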
	I1018 09:18:01.812631  330193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:18:01.828229  330193 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:18:01.828317  330193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:18:01.829858  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:18:01.835854  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:18:01.835874  330193 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:18:01.845491  330193 api_server.go:72] duration metric: took 199.769202ms to wait for apiserver process to appear ...
	I1018 09:18:01.845522  330193 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:18:01.845544  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:01.852363  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:18:01.854253  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:18:01.854275  330193 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:18:01.872324  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:18:01.872363  330193 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:18:01.891549  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:18:01.891576  330193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:18:01.910545  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:18:01.910574  330193 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:18:01.928312  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:18:01.928337  330193 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:18:01.942869  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:18:01.942897  330193 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:18:01.957264  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:18:01.957287  330193 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:18:01.971834  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:18:01.971871  330193 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:18:01.988808  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:18:03.360064  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:18:03.360099  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:18:03.360117  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:03.416525  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1018 09:18:03.416558  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1018 09:18:03.845768  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:03.850882  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:18:03.850913  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
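	(Editor's note: the 403 -> 500 -> 200 sequence above is the normal restart progression: anonymous /healthz is forbidden until the system:public-info-viewer bootstrap ClusterRole is recreated, the endpoint then returns 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and the poll at 09:18:04 finally gets 200. A sketch of such a poll loop, standard library only; waitHealthz is hypothetical, and TLS verification is skipped here only because the cluster CA is local — real code should pin it:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func waitHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // sketch only: pin the cluster CA instead of skipping verification
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // healthz answered "ok"
	                }
	                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
	            }
	            time.Sleep(500 * time.Millisecond) // roughly the poll cadence in the log
	        }
	        return fmt.Errorf("apiserver never became healthy")
	    }

	    func main() {
	        if err := waitHealthz("https://192.168.103.2:8443/healthz", time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    })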
	I1018 09:18:03.925688  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.095784279s)
	I1018 09:18:03.925778  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.073377378s)
	I1018 09:18:03.925913  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.937061029s)
	I1018 09:18:03.929127  330193 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-444637 addons enable metrics-server
	
	I1018 09:18:03.937380  330193 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1018 09:18:01.035250  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:03.035670  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	I1018 09:18:03.938934  330193 addons.go:514] duration metric: took 2.293172614s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:18:04.346493  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:04.351148  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:18:04.351178  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:18:04.845878  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:04.850252  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 09:18:04.851396  330193 api_server.go:141] control plane version: v1.34.1
	I1018 09:18:04.851430  330193 api_server.go:131] duration metric: took 3.005900151s to wait for apiserver health ...
	I1018 09:18:04.851440  330193 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:18:04.855053  330193 system_pods.go:59] 8 kube-system pods found
	I1018 09:18:04.855092  330193 system_pods.go:61] "coredns-66bc5c9577-gc5dd" [7fab8a8d-bdb4-47d4-bf7d-d03341018666] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:18:04.855100  330193 system_pods.go:61] "etcd-newest-cni-444637" [b54d61ad-b52d-4343-ba3a-a64b03934319] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:18:04.855111  330193 system_pods.go:61] "kindnet-qmlcq" [2c82849a-5511-43a1-a300-a7f46df288ec] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 09:18:04.855117  330193 system_pods.go:61] "kube-apiserver-newest-cni-444637" [a9136c1f-8962-45f7-b005-05bd3f856403] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:18:04.855124  330193 system_pods.go:61] "kube-controller-manager-newest-cni-444637" [b8d840d7-04c3-495c-aafa-cc8a06e58f06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:18:04.855130  330193 system_pods.go:61] "kube-proxy-hbkn5" [d70417da-43f2-4d8c-a088-07cea5225c34] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:18:04.855138  330193 system_pods.go:61] "kube-scheduler-newest-cni-444637" [175527c5-4260-4e39-be83-4c36417f3cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:18:04.855142  330193 system_pods.go:61] "storage-provisioner" [b0974a78-b6ad-45c3-8241-86f8bb7bc65b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:18:04.855151  330193 system_pods.go:74] duration metric: took 3.706424ms to wait for pod list to return data ...
	I1018 09:18:04.855162  330193 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:18:04.857785  330193 default_sa.go:45] found service account: "default"
	I1018 09:18:04.857804  330193 default_sa.go:55] duration metric: took 2.636173ms for default service account to be created ...
	I1018 09:18:04.857817  330193 kubeadm.go:586] duration metric: took 3.212102689s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:18:04.857837  330193 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:18:04.860449  330193 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:18:04.860472  330193 node_conditions.go:123] node cpu capacity is 8
	I1018 09:18:04.860486  330193 node_conditions.go:105] duration metric: took 2.642504ms to run NodePressure ...
	I1018 09:18:04.860498  330193 start.go:241] waiting for startup goroutines ...
	I1018 09:18:04.860504  330193 start.go:246] waiting for cluster config update ...
	I1018 09:18:04.860514  330193 start.go:255] writing updated cluster config ...
	I1018 09:18:04.860806  330193 ssh_runner.go:195] Run: rm -f paused
	I1018 09:18:04.910604  330193 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:18:04.913879  330193 out.go:179] * Done! kubectl is now configured to use "newest-cni-444637" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.250643549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.255092831Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=848c943b-9532-4a04-a022-94c7152fc501 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.255757079Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=656142e0-a275-4f46-a31c-c456a18983d6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.256665873Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.257114378Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.257471362Z" level=info msg="Ran pod sandbox e80d3bb3748d96759434acdc5461294811206e767cb05d85c18434752ec8fe38 with infra container: kube-system/kindnet-qmlcq/POD" id=848c943b-9532-4a04-a022-94c7152fc501 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.257758597Z" level=info msg="Ran pod sandbox eeada871524e733638138564f7e61ec0d8989327c0b848eeaf22b32ddc96c505 with infra container: kube-system/kube-proxy-hbkn5/POD" id=656142e0-a275-4f46-a31c-c456a18983d6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.258673507Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2df9a22a-e2a2-4ebc-98e5-7816d9692adc name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.258694432Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9b0c18cc-c831-4485-b67a-43f253e83a55 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.259654532Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f90145e6-063e-47a6-8d42-012ab3a26095 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.259757625Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6c26b868-5a89-428f-9e8a-72a7ae83b07e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.260831036Z" level=info msg="Creating container: kube-system/kube-proxy-hbkn5/kube-proxy" id=d57a90b0-84f3-4272-9d1d-193da7f7cbe3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.260831512Z" level=info msg="Creating container: kube-system/kindnet-qmlcq/kindnet-cni" id=e52d46e8-85cd-486d-9a6f-f5fbd44106d1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.261116527Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.261173955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.26514329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.265811658Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.267758698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.268249652Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.296086449Z" level=info msg="Created container 12a571301d0c534347c1c7a177bd731868cad6ff75cd5a1c93af4981287ee430: kube-system/kindnet-qmlcq/kindnet-cni" id=e52d46e8-85cd-486d-9a6f-f5fbd44106d1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.296835533Z" level=info msg="Starting container: 12a571301d0c534347c1c7a177bd731868cad6ff75cd5a1c93af4981287ee430" id=fd60239c-0828-46ec-a966-9262607d2422 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.298518763Z" level=info msg="Started container" PID=1047 containerID=12a571301d0c534347c1c7a177bd731868cad6ff75cd5a1c93af4981287ee430 description=kube-system/kindnet-qmlcq/kindnet-cni id=fd60239c-0828-46ec-a966-9262607d2422 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e80d3bb3748d96759434acdc5461294811206e767cb05d85c18434752ec8fe38
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.29899744Z" level=info msg="Created container ac55486a499ee462d3e0e111469c5d5af99e91ca5df597256a68ff492f0d410b: kube-system/kube-proxy-hbkn5/kube-proxy" id=d57a90b0-84f3-4272-9d1d-193da7f7cbe3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.299692021Z" level=info msg="Starting container: ac55486a499ee462d3e0e111469c5d5af99e91ca5df597256a68ff492f0d410b" id=8cc43169-e9a2-4844-a466-bfa67a71bb1c name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.302177238Z" level=info msg="Started container" PID=1048 containerID=ac55486a499ee462d3e0e111469c5d5af99e91ca5df597256a68ff492f0d410b description=kube-system/kube-proxy-hbkn5/kube-proxy id=8cc43169-e9a2-4844-a466-bfa67a71bb1c name=/runtime.v1.RuntimeService/StartContainer sandboxID=eeada871524e733638138564f7e61ec0d8989327c0b848eeaf22b32ddc96c505
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ac55486a499ee       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   eeada871524e7       kube-proxy-hbkn5                            kube-system
	12a571301d0c5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   e80d3bb3748d9       kindnet-qmlcq                               kube-system
	014aa61b2a700       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   7abad8abfc557       kube-controller-manager-newest-cni-444637   kube-system
	b91ae2df424fd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   fcc67ee54b2fb       kube-scheduler-newest-cni-444637            kube-system
	49cf2e65f5a68       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   1ec6a44610fc3       kube-apiserver-newest-cni-444637            kube-system
	390882244d272       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   51225c5a226d9       etcd-newest-cni-444637                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-444637
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-444637
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=newest-cni-444637
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_17_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:17:28 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-444637
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:18:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:18:03 +0000   Sat, 18 Oct 2025 09:17:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:18:03 +0000   Sat, 18 Oct 2025 09:17:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:18:03 +0000   Sat, 18 Oct 2025 09:17:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 09:18:03 +0000   Sat, 18 Oct 2025 09:17:25 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-444637
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                c20f4ce8-6abc-49e6-9924-f27306703b2d
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-444637                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-qmlcq                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-newest-cni-444637             250m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-newest-cni-444637    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-hbkn5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-newest-cni-444637             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 30s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node newest-cni-444637 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node newest-cni-444637 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node newest-cni-444637 status is now: NodeHasSufficientPID
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s                kubelet          Node newest-cni-444637 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s                kubelet          Node newest-cni-444637 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s                kubelet          Node newest-cni-444637 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s                node-controller  Node newest-cni-444637 event: Registered Node newest-cni-444637 in Controller
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-444637 event: Registered Node newest-cni-444637 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[ +43.809014] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 6f 4b 2b 7f 46 08 06
	[  +0.000367] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	
	
	==> etcd [390882244d27208c7b2d7d0538a0ff970ed197d0a63b391f3e1c81bd7b8255df] <==
	{"level":"warn","ts":"2025-10-18T09:18:02.711259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.718770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.726416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.747042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.755591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.762394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.768987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.776336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.791430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.800015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.808506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.816110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.822477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.829106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.835939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.843845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.851143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.858726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.866259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.873644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.881412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.903432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.910197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.916797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.974545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56474","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:09 up  1:00,  0 user,  load average: 4.26, 3.75, 2.55
	Linux newest-cni-444637 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [12a571301d0c534347c1c7a177bd731868cad6ff75cd5a1c93af4981287ee430] <==
	I1018 09:18:04.489959       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:18:04.583719       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 09:18:04.583896       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:18:04.583914       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:18:04.583943       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:18:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:18:04.785768       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:18:04.785808       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:18:04.785823       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:18:04.785991       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:18:05.086556       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:18:05.086600       1 metrics.go:72] Registering metrics
	I1018 09:18:05.086672       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [49cf2e65f5a6801e31db940684d60041512ed73bbf34778abdfd8025afc8b25b] <==
	I1018 09:18:03.452818       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:18:03.452872       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:18:03.453050       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 09:18:03.453120       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:18:03.453130       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:18:03.453136       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:18:03.453142       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:18:03.453154       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:18:03.453259       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 09:18:03.459922       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:18:03.463672       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:18:03.470497       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:18:03.471523       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:18:03.723856       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:18:03.761675       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:18:03.789096       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:18:03.799949       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:18:03.808786       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:18:03.846529       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.164.42"}
	I1018 09:18:03.859562       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.208.194"}
	I1018 09:18:04.357146       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:18:07.077912       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:18:07.077969       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:18:07.227245       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:18:07.276917       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [014aa61b2a700319893c6b11615bd85925597f0738e1bf960657bba99a7ac5ea] <==
	I1018 09:18:06.744228       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:18:06.746481       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:18:06.749735       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:18:06.774192       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:18:06.774216       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:18:06.774239       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:18:06.774257       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:18:06.774278       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:18:06.774310       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:18:06.774393       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:18:06.774739       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:18:06.775775       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:18:06.775869       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:18:06.775961       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:18:06.780467       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:18:06.780523       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:18:06.780548       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:18:06.780552       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:18:06.780557       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:18:06.780673       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:18:06.785910       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:18:06.792090       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 09:18:06.796429       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:18:06.797618       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:18:06.799850       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [ac55486a499ee462d3e0e111469c5d5af99e91ca5df597256a68ff492f0d410b] <==
	I1018 09:18:04.336151       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:18:04.387210       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:18:04.487975       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:18:04.488015       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1018 09:18:04.488120       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:18:04.511074       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:18:04.511148       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:18:04.517790       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:18:04.518152       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:18:04.518184       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:18:04.519927       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:18:04.519958       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:18:04.519984       1 config.go:200] "Starting service config controller"
	I1018 09:18:04.519993       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:18:04.519990       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:18:04.520004       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:18:04.520559       1 config.go:309] "Starting node config controller"
	I1018 09:18:04.520574       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:18:04.520583       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:18:04.620166       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:18:04.620244       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:18:04.620246       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b91ae2df424fdafe037b2eea7a39a37f80929e5ab4c76c1169ce7ba3b9a4bdbd] <==
	I1018 09:18:02.250613       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:18:03.773953       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:18:03.773982       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:18:03.780390       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:18:03.780521       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 09:18:03.780536       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 09:18:03.780577       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:18:03.781619       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:18:03.781639       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:18:03.781660       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:18:03.781668       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:18:03.881241       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 09:18:03.881718       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:18:03.881728       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:18:02 newest-cni-444637 kubelet[673]: E1018 09:18:02.981497     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-444637\" not found" node="newest-cni-444637"
	Oct 18 09:18:02 newest-cni-444637 kubelet[673]: E1018 09:18:02.981637     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-444637\" not found" node="newest-cni-444637"
	Oct 18 09:18:02 newest-cni-444637 kubelet[673]: E1018 09:18:02.981796     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-444637\" not found" node="newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.444145     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: E1018 09:18:03.460032     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-444637\" already exists" pod="kube-system/etcd-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.460101     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: E1018 09:18:03.470244     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-444637\" already exists" pod="kube-system/kube-apiserver-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.470291     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.474791     673 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.474884     673 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.474921     673 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.476048     673 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: E1018 09:18:03.478386     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-444637\" already exists" pod="kube-system/kube-controller-manager-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.478469     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: E1018 09:18:03.484083     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-444637\" already exists" pod="kube-system/kube-scheduler-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.940325     673 apiserver.go:52] "Watching apiserver"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.970759     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d70417da-43f2-4d8c-a088-07cea5225c34-xtables-lock\") pod \"kube-proxy-hbkn5\" (UID: \"d70417da-43f2-4d8c-a088-07cea5225c34\") " pod="kube-system/kube-proxy-hbkn5"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.970826     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d70417da-43f2-4d8c-a088-07cea5225c34-lib-modules\") pod \"kube-proxy-hbkn5\" (UID: \"d70417da-43f2-4d8c-a088-07cea5225c34\") " pod="kube-system/kube-proxy-hbkn5"
	Oct 18 09:18:04 newest-cni-444637 kubelet[673]: I1018 09:18:04.045553     673 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 09:18:04 newest-cni-444637 kubelet[673]: I1018 09:18:04.071671     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2c82849a-5511-43a1-a300-a7f46df288ec-cni-cfg\") pod \"kindnet-qmlcq\" (UID: \"2c82849a-5511-43a1-a300-a7f46df288ec\") " pod="kube-system/kindnet-qmlcq"
	Oct 18 09:18:04 newest-cni-444637 kubelet[673]: I1018 09:18:04.071875     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c82849a-5511-43a1-a300-a7f46df288ec-xtables-lock\") pod \"kindnet-qmlcq\" (UID: \"2c82849a-5511-43a1-a300-a7f46df288ec\") " pod="kube-system/kindnet-qmlcq"
	Oct 18 09:18:04 newest-cni-444637 kubelet[673]: I1018 09:18:04.071920     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c82849a-5511-43a1-a300-a7f46df288ec-lib-modules\") pod \"kindnet-qmlcq\" (UID: \"2c82849a-5511-43a1-a300-a7f46df288ec\") " pod="kube-system/kindnet-qmlcq"
	Oct 18 09:18:05 newest-cni-444637 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:18:05 newest-cni-444637 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:18:05 newest-cni-444637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-444637 -n newest-cni-444637
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-444637 -n newest-cni-444637: exit status 2 (326.492693ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-444637 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gc5dd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-94qsp kubernetes-dashboard-855c9754f9-g5zw6
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-444637 describe pod coredns-66bc5c9577-gc5dd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-94qsp kubernetes-dashboard-855c9754f9-g5zw6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-444637 describe pod coredns-66bc5c9577-gc5dd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-94qsp kubernetes-dashboard-855c9754f9-g5zw6: exit status 1 (65.471988ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gc5dd" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-94qsp" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-g5zw6" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-444637 describe pod coredns-66bc5c9577-gc5dd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-94qsp kubernetes-dashboard-855c9754f9-g5zw6: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-444637
helpers_test.go:243: (dbg) docker inspect newest-cni-444637:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941",
	        "Created": "2025-10-18T09:17:13.777714578Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 330393,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:17:54.667607343Z",
	            "FinishedAt": "2025-10-18T09:17:53.054918167Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941/hostname",
	        "HostsPath": "/var/lib/docker/containers/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941/hosts",
	        "LogPath": "/var/lib/docker/containers/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941/891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941-json.log",
	        "Name": "/newest-cni-444637",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-444637:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-444637",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "891566b377adcaa9cc2816d33e76914e19937c48e9ad4928e80005a493fd9941",
	                "LowerDir": "/var/lib/docker/overlay2/299ed2d3b858283b2c8206fda315e99ed5d127ab10dbdaecabdcb4955ace8dbc-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/299ed2d3b858283b2c8206fda315e99ed5d127ab10dbdaecabdcb4955ace8dbc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/299ed2d3b858283b2c8206fda315e99ed5d127ab10dbdaecabdcb4955ace8dbc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/299ed2d3b858283b2c8206fda315e99ed5d127ab10dbdaecabdcb4955ace8dbc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-444637",
	                "Source": "/var/lib/docker/volumes/newest-cni-444637/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-444637",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-444637",
	                "name.minikube.sigs.k8s.io": "newest-cni-444637",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6eade8f3ef5f4aab3a7759e779f30593ae1de7dbe971ffea92f10612f0f06184",
	            "SandboxKey": "/var/run/docker/netns/6eade8f3ef5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-444637": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:af:6f:0f:e5:bb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dd9c4a8b133b5630e17e447400d74046fdd59b021f81b3128919b2fa8ae8dbbe",
	                    "EndpointID": "0fd6cda962ca3b0dfbdcf9d50b3a275aaa14b1dba26c704c76ba71db05464b91",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-444637",
	                        "891566b377ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-444637 -n newest-cni-444637
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-444637 -n newest-cni-444637: exit status 2 (323.725329ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-444637 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-031066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ start   │ -p no-preload-031066 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-880603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-880603 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:17 UTC │
	│ image   │ old-k8s-version-951975 image list --format=json                                                                                                                                                                                               │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ pause   │ -p old-k8s-version-951975 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-986220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-880603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-986220 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ image   │ no-preload-031066 image list --format=json                                                                                                                                                                                                    │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ pause   │ -p no-preload-031066 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-986220 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ delete  │ -p no-preload-031066                                                                                                                                                                                                                          │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p no-preload-031066                                                                                                                                                                                                                          │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-444637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ stop    │ -p newest-cni-444637 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-444637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:18 UTC │
	│ image   │ newest-cni-444637 image list --format=json                                                                                                                                                                                                    │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ pause   │ -p newest-cni-444637 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:17:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:17:54.427005  330193 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:17:54.427270  330193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:54.427281  330193 out.go:374] Setting ErrFile to fd 2...
	I1018 09:17:54.427287  330193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:54.427525  330193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:17:54.428050  330193 out.go:368] Setting JSON to false
	I1018 09:17:54.429280  330193 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3622,"bootTime":1760775452,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:17:54.429387  330193 start.go:141] virtualization: kvm guest
	I1018 09:17:54.431635  330193 out.go:179] * [newest-cni-444637] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:17:54.432952  330193 notify.go:220] Checking for updates...
	I1018 09:17:54.432979  330193 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:17:54.434488  330193 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:17:54.435897  330193 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:54.437111  330193 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:17:54.438264  330193 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:17:54.439545  330193 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:17:54.441204  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:54.441727  330193 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:17:54.467746  330193 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:17:54.467827  330193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:54.527403  330193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 09:17:54.515566485 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:54.527559  330193 docker.go:318] overlay module found
	I1018 09:17:54.529436  330193 out.go:179] * Using the docker driver based on existing profile
	I1018 09:17:54.530557  330193 start.go:305] selected driver: docker
	I1018 09:17:54.530578  330193 start.go:925] validating driver "docker" against &{Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:54.530680  330193 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:17:54.531357  330193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:54.591156  330193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 09:17:54.580755477 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:54.591532  330193 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:17:54.591566  330193 cni.go:84] Creating CNI manager for ""
	I1018 09:17:54.591617  330193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:17:54.591683  330193 start.go:349] cluster config:
	{Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:54.593449  330193 out.go:179] * Starting "newest-cni-444637" primary control-plane node in "newest-cni-444637" cluster
	I1018 09:17:54.594724  330193 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:17:54.596122  330193 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:17:54.597292  330193 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:17:54.597335  330193 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:17:54.597376  330193 cache.go:58] Caching tarball of preloaded images
	I1018 09:17:54.597366  330193 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:17:54.597499  330193 preload.go:233] Found /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:17:54.597519  330193 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:17:54.597628  330193 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json ...
	I1018 09:17:54.619906  330193 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:17:54.619924  330193 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:17:54.619939  330193 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:17:54.619961  330193 start.go:360] acquireMachinesLock for newest-cni-444637: {Name:mkf6974ca6fc7b22cdf212b383f50d3f090ea59b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:17:54.620020  330193 start.go:364] duration metric: took 43.166µs to acquireMachinesLock for "newest-cni-444637"
	I1018 09:17:54.620037  330193 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:17:54.620042  330193 fix.go:54] fixHost starting: 
	I1018 09:17:54.620234  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:54.638627  330193 fix.go:112] recreateIfNeeded on newest-cni-444637: state=Stopped err=<nil>
	W1018 09:17:54.638652  330193 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:17:51.833553  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:53.833757  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:56.034833  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:17:58.534991  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	I1018 09:17:54.640543  330193 out.go:252] * Restarting existing docker container for "newest-cni-444637" ...
	I1018 09:17:54.640644  330193 cli_runner.go:164] Run: docker start newest-cni-444637
	I1018 09:17:54.903916  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:54.923445  330193 kic.go:430] container "newest-cni-444637" state is running.
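
The restart decision above turns on a single probe, "docker container inspect --format={{.State.Status}}", run once before and once after "docker start". A minimal Go sketch of that probe; containerState and the hard-coded profile name are illustrative, not minikube's actual helpers:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState returns the Docker state string ("running", "exited",
	// ...) for the named container, mirroring the cli_runner calls above.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("newest-cni-444637")
		if err != nil {
			fmt.Println(err)
			return
		}
		// A stopped container is restarted in place ("Skipping create...");
		// a missing one would fall through to the create path instead.
		fmt.Println("state:", state)
	}
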
	I1018 09:17:54.923919  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:54.944878  330193 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json ...
	I1018 09:17:54.945143  330193 machine.go:93] provisionDockerMachine start ...
	I1018 09:17:54.945221  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:54.965135  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:54.965422  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:54.965438  330193 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:17:54.966008  330193 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59674->127.0.0.1:33133: read: connection reset by peer
	I1018 09:17:58.102821  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-444637
	
	I1018 09:17:58.102846  330193 ubuntu.go:182] provisioning hostname "newest-cni-444637"
	I1018 09:17:58.102902  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.121992  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.122251  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.122274  330193 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-444637 && echo "newest-cni-444637" | sudo tee /etc/hostname
	I1018 09:17:58.271611  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-444637
	
	I1018 09:17:58.271696  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.295116  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.295331  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.295366  330193 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-444637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-444637/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-444637' | sudo tee -a /etc/hosts; 
				fi
			fi
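
The shell snippet above is the idempotent /etc/hosts fix-up: rewrite an existing 127.0.1.1 line if one is present, otherwise append one, so the machine resolves its own hostname. A sketch of shipping the same snippet from Go; ensureHostsEntry and the plain ssh invocation are stand-ins for minikube's ssh_runner, which uses an in-process SSH client:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureHostsEntry reproduces the provisioning step above: rewrite any
	// existing 127.0.1.1 line in the guest's /etc/hosts, or append one.
	// Hypothetical helper; minikube runs the identical shell over SSH.
	func ensureHostsEntry(host string, port int, hostname string) error {
		script := fmt.Sprintf(
			"if ! grep -xq '.*\\s%[1]s' /etc/hosts; then\n"+
				"  if grep -xq '127.0.1.1\\s.*' /etc/hosts; then\n"+
				"    sudo sed -i 's/^127.0.1.1\\s.*/127.0.1.1 %[1]s/g' /etc/hosts\n"+
				"  else\n"+
				"    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts\n"+
				"  fi\n"+
				"fi", hostname)
		out, err := exec.Command("ssh", "-p", fmt.Sprint(port),
			"docker@"+host, script).CombinedOutput()
		if err != nil {
			return fmt.Errorf("hosts update on %s:%d failed: %v: %s", host, port, err, out)
		}
		return nil
	}

	func main() {
		if err := ensureHostsEntry("127.0.0.1", 33133, "newest-cni-444637"); err != nil {
			fmt.Println(err)
		}
	}
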
	I1018 09:17:58.435338  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:17:58.435406  330193 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:17:58.435457  330193 ubuntu.go:190] setting up certificates
	I1018 09:17:58.435470  330193 provision.go:84] configureAuth start
	I1018 09:17:58.435550  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:58.454683  330193 provision.go:143] copyHostCerts
	I1018 09:17:58.454758  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:17:58.454789  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:17:58.454878  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:17:58.455021  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:17:58.455032  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:17:58.455077  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:17:58.455176  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:17:58.455185  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:17:58.455229  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:17:58.455323  330193 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-444637 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-444637]
	I1018 09:17:58.651717  330193 provision.go:177] copyRemoteCerts
	I1018 09:17:58.651791  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:17:58.651850  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.670990  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:58.769295  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:17:58.788403  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:17:58.807495  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:17:58.826308  330193 provision.go:87] duration metric: took 390.822036ms to configureAuth
	I1018 09:17:58.826335  330193 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:17:58.826534  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:58.826624  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.845940  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.846169  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.846191  330193 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:17:59.117215  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:17:59.117238  330193 machine.go:96] duration metric: took 4.172078969s to provisionDockerMachine
	I1018 09:17:59.117253  330193 start.go:293] postStartSetup for "newest-cni-444637" (driver="docker")
	I1018 09:17:59.117266  330193 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:17:59.117338  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:17:59.117401  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.136996  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.235549  330193 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:17:59.239452  330193 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:17:59.239483  330193 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:17:59.239505  330193 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:17:59.239563  330193 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:17:59.239658  330193 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:17:59.239788  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:17:59.248379  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:17:59.268012  330193 start.go:296] duration metric: took 150.737252ms for postStartSetup
	I1018 09:17:59.268099  330193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:17:59.268146  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.287401  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.382795  330193 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:17:59.388305  330193 fix.go:56] duration metric: took 4.768253133s for fixHost
	I1018 09:17:59.388338  330193 start.go:83] releasing machines lock for "newest-cni-444637", held for 4.76830641s
	I1018 09:17:59.388481  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:59.407756  330193 ssh_runner.go:195] Run: cat /version.json
	I1018 09:17:59.407798  330193 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:17:59.407876  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.407803  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	W1018 09:17:56.333478  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	I1018 09:17:58.333556  318609 pod_ready.go:94] pod "coredns-66bc5c9577-7fnw7" is "Ready"
	I1018 09:17:58.333585  318609 pod_ready.go:86] duration metric: took 36.506179321s for pod "coredns-66bc5c9577-7fnw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.336410  318609 pod_ready.go:83] waiting for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.341932  318609 pod_ready.go:94] pod "etcd-embed-certs-880603" is "Ready"
	I1018 09:17:58.341964  318609 pod_ready.go:86] duration metric: took 5.525225ms for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.344669  318609 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.349852  318609 pod_ready.go:94] pod "kube-apiserver-embed-certs-880603" is "Ready"
	I1018 09:17:58.349882  318609 pod_ready.go:86] duration metric: took 5.170321ms for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.352067  318609 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.532002  318609 pod_ready.go:94] pod "kube-controller-manager-embed-certs-880603" is "Ready"
	I1018 09:17:58.532034  318609 pod_ready.go:86] duration metric: took 179.946406ms for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.732243  318609 pod_ready.go:83] waiting for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.131632  318609 pod_ready.go:94] pod "kube-proxy-k4kcs" is "Ready"
	I1018 09:17:59.131665  318609 pod_ready.go:86] duration metric: took 399.394452ms for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.332088  318609 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.734805  318609 pod_ready.go:94] pod "kube-scheduler-embed-certs-880603" is "Ready"
	I1018 09:17:59.734842  318609 pod_ready.go:86] duration metric: took 402.724813ms for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.734856  318609 pod_ready.go:40] duration metric: took 37.912005765s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:17:59.783224  318609 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:17:59.785136  318609 out.go:179] * Done! kubectl is now configured to use "embed-certs-880603" cluster and "default" namespace by default
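
The pod_ready.go lines above poll each kube-system pod until its Ready condition reports True (or the pod is gone). Roughly the same predicate expressed with client-go; the kubeconfig path and pod name are illustrative:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True, the
	// condition the pod_ready.go waits above are polling on.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.Background(), "coredns-66bc5c9577-7fnw7", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", isPodReady(pod))
	}
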
	I1018 09:17:59.428145  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.430455  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.580030  330193 ssh_runner.go:195] Run: systemctl --version
	I1018 09:17:59.587085  330193 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:17:59.625510  330193 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:17:59.630784  330193 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:17:59.630846  330193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:17:59.639622  330193 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:17:59.639650  330193 start.go:495] detecting cgroup driver to use...
	I1018 09:17:59.639695  330193 detect.go:190] detected "systemd" cgroup driver on host os
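
Cgroup-driver detection keys off the host: the Docker daemon reports systemd, so CRI-O is configured to match a few lines below. One way to reproduce the detection is to ask the daemon directly; a sketch (minikube's detect.go may consult additional signals):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostCgroupDriver asks the host Docker daemon which cgroup driver it
	// uses ("systemd" or "cgroupfs").
	func hostCgroupDriver() (string, error) {
		out, err := exec.Command("docker", "info",
			"--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		d, err := hostCgroupDriver()
		if err != nil {
			fmt.Println("detect failed:", err)
			return
		}
		fmt.Println("cgroup driver:", d) // "systemd" in the run above
	}
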
	I1018 09:17:59.639752  330193 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:17:59.654825  330193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:17:59.668280  330193 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:17:59.668366  330193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:17:59.683973  330193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:17:59.698385  330193 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:17:59.790586  330193 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:17:59.892076  330193 docker.go:234] disabling docker service ...
	I1018 09:17:59.892147  330193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:17:59.908881  330193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:17:59.922861  330193 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:18:00.012767  330193 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:18:00.112051  330193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:18:00.125686  330193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:18:00.142184  330193 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:18:00.142248  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.153446  330193 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:18:00.153510  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.163772  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.173529  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.183180  330193 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:18:00.192357  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.202160  330193 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.211313  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
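
The pair of commands above uses a grep-or-sed idiom: only if no default_sysctls key exists is an empty list inserted after conmon_cgroup, then the unprivileged-port sysctl is spliced into it, so re-runs stay idempotent. A sketch of that insert-if-missing pattern as a Go helper; ensureLine is hypothetical, and minikube drives the same logic through sh -c over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureLine runs the sed script only when no line matching pattern
	// exists yet, mirroring the `grep -q ... || sed -i ...` idiom above.
	func ensureLine(path, pattern, sedScript string) error {
		shell := fmt.Sprintf("sudo grep -q %q %s || sudo sed -i %q %s",
			pattern, path, sedScript, path)
		out, err := exec.Command("sh", "-c", shell).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		err := ensureLine(conf, `^ *default_sysctls`,
			`/conmon_cgroup = .*/a default_sysctls = [\n]`)
		if err != nil {
			fmt.Println(err)
		}
	}
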
	I1018 09:18:00.221003  330193 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:18:00.229269  330193 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:18:00.238137  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:00.320620  330193 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:18:00.435033  330193 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:18:00.435106  330193 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:18:00.439539  330193 start.go:563] Will wait 60s for crictl version
	I1018 09:18:00.439606  330193 ssh_runner.go:195] Run: which crictl
	I1018 09:18:00.443682  330193 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:18:00.469987  330193 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:18:00.470070  330193 ssh_runner.go:195] Run: crio --version
	I1018 09:18:00.500186  330193 ssh_runner.go:195] Run: crio --version
	I1018 09:18:00.531772  330193 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:18:00.533155  330193 cli_runner.go:164] Run: docker network inspect newest-cni-444637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:18:00.552284  330193 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:18:00.556833  330193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:18:00.569469  330193 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:18:00.570643  330193 kubeadm.go:883] updating cluster {Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:18:00.570761  330193 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:18:00.570826  330193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:18:00.604611  330193 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:18:00.604633  330193 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:18:00.604679  330193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:18:00.632395  330193 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:18:00.632438  330193 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:18:00.632446  330193 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:18:00.632555  330193 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-444637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:18:00.632630  330193 ssh_runner.go:195] Run: crio config
	I1018 09:18:00.683711  330193 cni.go:84] Creating CNI manager for ""
	I1018 09:18:00.683732  330193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:18:00.683746  330193 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:18:00.683770  330193 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-444637 NodeName:newest-cni-444637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:18:00.683897  330193 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-444637"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
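
	# (end of generated kubeadm config)
minikube renders the manifest above from Go templates and ships it to the node as /var/tmp/minikube/kubeadm.yaml.new (the 2214-byte scp below). A toy rendering of just the networking stanza with text/template; the template text here is a reduction for illustration, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// A reduced stand-in for the ClusterConfiguration networking block;
	// the real template carries every field shown in the log above.
	const networking = "networking:\n" +
		"  dnsDomain: {{.DNSDomain}}\n" +
		"  podSubnet: \"{{.PodSubnet}}\"\n" +
		"  serviceSubnet: {{.ServiceCIDR}}\n"

	func main() {
		t := template.Must(template.New("net").Parse(networking))
		_ = t.Execute(os.Stdout, map[string]string{
			"DNSDomain":   "cluster.local",
			"PodSubnet":   "10.42.0.0/16",
			"ServiceCIDR": "10.96.0.0/12",
		})
	}
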
	
	I1018 09:18:00.683961  330193 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:18:00.693538  330193 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:18:00.693611  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:18:00.701785  330193 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:18:00.715623  330193 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:18:00.729315  330193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:18:00.742706  330193 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:18:00.746993  330193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:18:00.758274  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:00.846197  330193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:18:00.874953  330193 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637 for IP: 192.168.103.2
	I1018 09:18:00.874980  330193 certs.go:195] generating shared ca certs ...
	I1018 09:18:00.875000  330193 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:00.875152  330193 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:18:00.875197  330193 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:18:00.875207  330193 certs.go:257] generating profile certs ...
	I1018 09:18:00.875295  330193 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/client.key
	I1018 09:18:00.875391  330193 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key.d9d366ba
	I1018 09:18:00.875439  330193 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key
	I1018 09:18:00.875557  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:18:00.875586  330193 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:18:00.875596  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:18:00.875619  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:18:00.875641  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:18:00.875661  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:18:00.875704  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:18:00.876245  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:18:00.896645  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:18:00.916475  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:18:00.937413  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:18:00.962164  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:18:00.982149  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:18:01.001065  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:18:01.021602  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:18:01.041260  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:18:01.060553  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:18:01.080521  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:18:01.099406  330193 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:18:01.112902  330193 ssh_runner.go:195] Run: openssl version
	I1018 09:18:01.119558  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:18:01.128761  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.133075  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.133130  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.169581  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:18:01.178326  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:18:01.187653  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.191858  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.191912  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.227900  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:18:01.236865  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:18:01.245974  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.250554  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.250615  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.285905  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:18:01.295059  330193 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:18:01.299170  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:18:01.334401  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:18:01.369411  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:18:01.417245  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:18:01.463956  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:18:01.519260  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
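
Each "openssl x509 -checkend 86400" run above asks one question: does the certificate expire within the next 24 hours? The equivalent check in Go's crypto/x509, using one of the certs checked above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within d, the same predicate as `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
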
	I1018 09:18:01.564643  330193 kubeadm.go:400] StartCluster: {Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:18:01.564725  330193 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:18:01.564799  330193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:18:01.596025  330193 cri.go:89] found id: "014aa61b2a700319893c6b11615bd85925597f0738e1bf960657bba99a7ac5ea"
	I1018 09:18:01.596053  330193 cri.go:89] found id: "b91ae2df424fdafe037b2eea7a39a37f80929e5ab4c76c1169ce7ba3b9a4bdbd"
	I1018 09:18:01.596059  330193 cri.go:89] found id: "49cf2e65f5a6801e31db940684d60041512ed73bbf34778abdfd8025afc8b25b"
	I1018 09:18:01.596064  330193 cri.go:89] found id: "390882244d27208c7b2d7d0538a0ff970ed197d0a63b391f3e1c81bd7b8255df"
	I1018 09:18:01.596069  330193 cri.go:89] found id: ""
	I1018 09:18:01.596114  330193 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:18:01.609602  330193 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:01Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:18:01.609687  330193 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:18:01.619278  330193 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:18:01.619297  330193 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:18:01.619376  330193 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:18:01.628525  330193 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:18:01.629710  330193 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-444637" does not appear in /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:18:01.630508  330193 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-5897/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-444637" cluster setting kubeconfig missing "newest-cni-444637" context setting]
	I1018 09:18:01.631708  330193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.633868  330193 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:18:01.643225  330193 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1018 09:18:01.643268  330193 kubeadm.go:601] duration metric: took 23.964839ms to restartPrimaryControlPlane
	I1018 09:18:01.643282  330193 kubeadm.go:402] duration metric: took 78.647978ms to StartCluster
	I1018 09:18:01.643303  330193 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.643398  330193 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:18:01.645409  330193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.645688  330193 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:18:01.645769  330193 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:18:01.645862  330193 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-444637"
	I1018 09:18:01.645882  330193 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-444637"
	W1018 09:18:01.645893  330193 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:18:01.645893  330193 addons.go:69] Setting dashboard=true in profile "newest-cni-444637"
	I1018 09:18:01.645921  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.645934  330193 addons.go:238] Setting addon dashboard=true in "newest-cni-444637"
	W1018 09:18:01.645945  330193 addons.go:247] addon dashboard should already be in state true
	I1018 09:18:01.645945  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:18:01.645948  330193 addons.go:69] Setting default-storageclass=true in profile "newest-cni-444637"
	I1018 09:18:01.645973  330193 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-444637"
	I1018 09:18:01.645980  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.646303  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.646463  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.646481  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.647698  330193 out.go:179] * Verifying Kubernetes components...
	I1018 09:18:01.649210  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:01.673812  330193 addons.go:238] Setting addon default-storageclass=true in "newest-cni-444637"
	W1018 09:18:01.673837  330193 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:18:01.673877  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.674375  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.674516  330193 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:18:01.678901  330193 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:18:01.678924  330193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:18:01.678985  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.679140  330193 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:18:01.680475  330193 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:18:01.681672  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:18:01.681729  330193 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:18:01.681827  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.707736  330193 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:18:01.707766  330193 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:18:01.707826  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.713270  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.719016  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.734187  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.812631  330193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:18:01.828229  330193 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:18:01.828317  330193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:18:01.829858  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:18:01.835854  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:18:01.835874  330193 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:18:01.845491  330193 api_server.go:72] duration metric: took 199.769202ms to wait for apiserver process to appear ...
	I1018 09:18:01.845522  330193 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:18:01.845544  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:01.852363  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:18:01.854253  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:18:01.854275  330193 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:18:01.872324  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:18:01.872363  330193 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:18:01.891549  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:18:01.891576  330193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:18:01.910545  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:18:01.910574  330193 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:18:01.928312  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:18:01.928337  330193 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:18:01.942869  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:18:01.942897  330193 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:18:01.957264  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:18:01.957287  330193 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:18:01.971834  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:18:01.971871  330193 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:18:01.988808  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:18:03.360064  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:18:03.360099  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:18:03.360117  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:03.416525  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1018 09:18:03.416558  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1018 09:18:03.845768  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:03.850882  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:18:03.850913  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:18:03.925688  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.095784279s)
	I1018 09:18:03.925778  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.073377378s)
	I1018 09:18:03.925913  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.937061029s)
	I1018 09:18:03.929127  330193 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-444637 addons enable metrics-server
	
	I1018 09:18:03.937380  330193 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1018 09:18:01.035250  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:03.035670  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	I1018 09:18:03.938934  330193 addons.go:514] duration metric: took 2.293172614s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:18:04.346493  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:04.351148  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:18:04.351178  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:18:04.845878  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:04.850252  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 09:18:04.851396  330193 api_server.go:141] control plane version: v1.34.1
	I1018 09:18:04.851430  330193 api_server.go:131] duration metric: took 3.005900151s to wait for apiserver health ...
	I1018 09:18:04.851440  330193 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:18:04.855053  330193 system_pods.go:59] 8 kube-system pods found
	I1018 09:18:04.855092  330193 system_pods.go:61] "coredns-66bc5c9577-gc5dd" [7fab8a8d-bdb4-47d4-bf7d-d03341018666] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:18:04.855100  330193 system_pods.go:61] "etcd-newest-cni-444637" [b54d61ad-b52d-4343-ba3a-a64b03934319] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:18:04.855111  330193 system_pods.go:61] "kindnet-qmlcq" [2c82849a-5511-43a1-a300-a7f46df288ec] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 09:18:04.855117  330193 system_pods.go:61] "kube-apiserver-newest-cni-444637" [a9136c1f-8962-45f7-b005-05bd3f856403] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:18:04.855124  330193 system_pods.go:61] "kube-controller-manager-newest-cni-444637" [b8d840d7-04c3-495c-aafa-cc8a06e58f06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:18:04.855130  330193 system_pods.go:61] "kube-proxy-hbkn5" [d70417da-43f2-4d8c-a088-07cea5225c34] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:18:04.855138  330193 system_pods.go:61] "kube-scheduler-newest-cni-444637" [175527c5-4260-4e39-be83-4c36417f3cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:18:04.855142  330193 system_pods.go:61] "storage-provisioner" [b0974a78-b6ad-45c3-8241-86f8bb7bc65b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:18:04.855151  330193 system_pods.go:74] duration metric: took 3.706424ms to wait for pod list to return data ...
	I1018 09:18:04.855162  330193 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:18:04.857785  330193 default_sa.go:45] found service account: "default"
	I1018 09:18:04.857804  330193 default_sa.go:55] duration metric: took 2.636173ms for default service account to be created ...
	I1018 09:18:04.857817  330193 kubeadm.go:586] duration metric: took 3.212102689s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:18:04.857837  330193 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:18:04.860449  330193 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:18:04.860472  330193 node_conditions.go:123] node cpu capacity is 8
	I1018 09:18:04.860486  330193 node_conditions.go:105] duration metric: took 2.642504ms to run NodePressure ...
	I1018 09:18:04.860498  330193 start.go:241] waiting for startup goroutines ...
	I1018 09:18:04.860504  330193 start.go:246] waiting for cluster config update ...
	I1018 09:18:04.860514  330193 start.go:255] writing updated cluster config ...
	I1018 09:18:04.860806  330193 ssh_runner.go:195] Run: rm -f paused
	I1018 09:18:04.910604  330193 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:18:04.913879  330193 out.go:179] * Done! kubectl is now configured to use "newest-cni-444637" cluster and "default" namespace by default
	W1018 09:18:05.535906  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:08.034961  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
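The healthz progression above is the normal apiserver bootstrap sequence after a restart: 403 while the probe is still anonymous, 403 again while the "system:public-info-viewer" clusterrole has not yet been recreated, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are pending, and finally 200 once bootstrap completes. To probe the same endpoint by hand, a minimal sketch (profile, context, and endpoint are taken from the log above; adjust for your environment):

	# Verbose healthz prints the per-check [+]/[-] lines seen in the log;
	# going through kubectl authenticates, avoiding the system:anonymous 403.
	kubectl --context newest-cni-444637 get --raw '/healthz?verbose'

	# An unauthenticated probe reproduces the 403 bodies logged above.
	curl -ks https://192.168.103.2:8443/healthz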
	
	
	==> CRI-O <==
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.250643549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.255092831Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=848c943b-9532-4a04-a022-94c7152fc501 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.255757079Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=656142e0-a275-4f46-a31c-c456a18983d6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.256665873Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.257114378Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.257471362Z" level=info msg="Ran pod sandbox e80d3bb3748d96759434acdc5461294811206e767cb05d85c18434752ec8fe38 with infra container: kube-system/kindnet-qmlcq/POD" id=848c943b-9532-4a04-a022-94c7152fc501 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.257758597Z" level=info msg="Ran pod sandbox eeada871524e733638138564f7e61ec0d8989327c0b848eeaf22b32ddc96c505 with infra container: kube-system/kube-proxy-hbkn5/POD" id=656142e0-a275-4f46-a31c-c456a18983d6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.258673507Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2df9a22a-e2a2-4ebc-98e5-7816d9692adc name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.258694432Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9b0c18cc-c831-4485-b67a-43f253e83a55 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.259654532Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f90145e6-063e-47a6-8d42-012ab3a26095 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.259757625Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6c26b868-5a89-428f-9e8a-72a7ae83b07e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.260831036Z" level=info msg="Creating container: kube-system/kube-proxy-hbkn5/kube-proxy" id=d57a90b0-84f3-4272-9d1d-193da7f7cbe3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.260831512Z" level=info msg="Creating container: kube-system/kindnet-qmlcq/kindnet-cni" id=e52d46e8-85cd-486d-9a6f-f5fbd44106d1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.261116527Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.261173955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.26514329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.265811658Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.267758698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.268249652Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.296086449Z" level=info msg="Created container 12a571301d0c534347c1c7a177bd731868cad6ff75cd5a1c93af4981287ee430: kube-system/kindnet-qmlcq/kindnet-cni" id=e52d46e8-85cd-486d-9a6f-f5fbd44106d1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.296835533Z" level=info msg="Starting container: 12a571301d0c534347c1c7a177bd731868cad6ff75cd5a1c93af4981287ee430" id=fd60239c-0828-46ec-a966-9262607d2422 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.298518763Z" level=info msg="Started container" PID=1047 containerID=12a571301d0c534347c1c7a177bd731868cad6ff75cd5a1c93af4981287ee430 description=kube-system/kindnet-qmlcq/kindnet-cni id=fd60239c-0828-46ec-a966-9262607d2422 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e80d3bb3748d96759434acdc5461294811206e767cb05d85c18434752ec8fe38
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.29899744Z" level=info msg="Created container ac55486a499ee462d3e0e111469c5d5af99e91ca5df597256a68ff492f0d410b: kube-system/kube-proxy-hbkn5/kube-proxy" id=d57a90b0-84f3-4272-9d1d-193da7f7cbe3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.299692021Z" level=info msg="Starting container: ac55486a499ee462d3e0e111469c5d5af99e91ca5df597256a68ff492f0d410b" id=8cc43169-e9a2-4844-a466-bfa67a71bb1c name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:18:04 newest-cni-444637 crio[522]: time="2025-10-18T09:18:04.302177238Z" level=info msg="Started container" PID=1048 containerID=ac55486a499ee462d3e0e111469c5d5af99e91ca5df597256a68ff492f0d410b description=kube-system/kube-proxy-hbkn5/kube-proxy id=8cc43169-e9a2-4844-a466-bfa67a71bb1c name=/runtime.v1.RuntimeService/StartContainer sandboxID=eeada871524e733638138564f7e61ec0d8989327c0b848eeaf22b32ddc96c505
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ac55486a499ee       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   eeada871524e7       kube-proxy-hbkn5                            kube-system
	12a571301d0c5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   e80d3bb3748d9       kindnet-qmlcq                               kube-system
	014aa61b2a700       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   7abad8abfc557       kube-controller-manager-newest-cni-444637   kube-system
	b91ae2df424fd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   fcc67ee54b2fb       kube-scheduler-newest-cni-444637            kube-system
	49cf2e65f5a68       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   1ec6a44610fc3       kube-apiserver-newest-cni-444637            kube-system
	390882244d272       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   51225c5a226d9       etcd-newest-cni-444637                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-444637
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-444637
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=newest-cni-444637
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_17_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:17:28 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-444637
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:18:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:18:03 +0000   Sat, 18 Oct 2025 09:17:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:18:03 +0000   Sat, 18 Oct 2025 09:17:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:18:03 +0000   Sat, 18 Oct 2025 09:17:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 09:18:03 +0000   Sat, 18 Oct 2025 09:17:25 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-444637
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                c20f4ce8-6abc-49e6-9924-f27306703b2d
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-444637                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-qmlcq                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      34s
	  kube-system                 kube-apiserver-newest-cni-444637             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-newest-cni-444637    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-hbkn5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-scheduler-newest-cni-444637             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 32s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node newest-cni-444637 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node newest-cni-444637 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node newest-cni-444637 status is now: NodeHasSufficientPID
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node newest-cni-444637 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node newest-cni-444637 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node newest-cni-444637 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           34s                node-controller  Node newest-cni-444637 event: Registered Node newest-cni-444637 in Controller
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-444637 event: Registered Node newest-cni-444637 in Controller
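The node.kubernetes.io/not-ready:NoSchedule taint and the NetworkReady=false "no CNI configuration file" condition shown above are why coredns-66bc5c9577-gc5dd and storage-provisioner appeared as Pending/Unschedulable in the earlier pod list. A short sketch to confirm the same state by hand (same context name as used elsewhere in this report):

	# Show the taint and the network-readiness message for the node.
	kubectl --context newest-cni-444637 describe node newest-cni-444637 | grep -E 'Taints|NetworkReady'

	# List the pods the scheduler cannot place while the taint remains.
	kubectl --context newest-cni-444637 get pods -A --field-selector=status.phase=Pending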
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[ +43.809014] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 6f 4b 2b 7f 46 08 06
	[  +0.000367] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	
	
	==> etcd [390882244d27208c7b2d7d0538a0ff970ed197d0a63b391f3e1c81bd7b8255df] <==
	{"level":"warn","ts":"2025-10-18T09:18:02.711259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.718770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.726416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.747042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.755591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.762394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.768987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.776336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.791430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.800015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.808506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.816110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.822477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.829106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.835939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.843845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.851143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.858726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.866259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.873644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.881412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.903432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.910197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.916797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:18:02.974545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56474","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:10 up  1:00,  0 user,  load average: 4.26, 3.75, 2.55
	Linux newest-cni-444637 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [12a571301d0c534347c1c7a177bd731868cad6ff75cd5a1c93af4981287ee430] <==
	I1018 09:18:04.489959       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:18:04.583719       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 09:18:04.583896       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:18:04.583914       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:18:04.583943       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:18:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:18:04.785768       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:18:04.785808       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:18:04.785823       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:18:04.785991       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:18:05.086556       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:18:05.086600       1 metrics.go:72] Registering metrics
	I1018 09:18:05.086672       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [49cf2e65f5a6801e31db940684d60041512ed73bbf34778abdfd8025afc8b25b] <==
	I1018 09:18:03.452818       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:18:03.452872       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:18:03.453050       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 09:18:03.453120       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:18:03.453130       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:18:03.453136       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:18:03.453142       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:18:03.453154       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:18:03.453259       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 09:18:03.459922       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:18:03.463672       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:18:03.470497       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:18:03.471523       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:18:03.723856       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:18:03.761675       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:18:03.789096       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:18:03.799949       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:18:03.808786       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:18:03.846529       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.164.42"}
	I1018 09:18:03.859562       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.208.194"}
	I1018 09:18:04.357146       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:18:07.077912       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:18:07.077969       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:18:07.227245       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:18:07.276917       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [014aa61b2a700319893c6b11615bd85925597f0738e1bf960657bba99a7ac5ea] <==
	I1018 09:18:06.744228       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:18:06.746481       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:18:06.749735       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:18:06.774192       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:18:06.774216       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:18:06.774239       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:18:06.774257       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:18:06.774278       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:18:06.774310       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:18:06.774393       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:18:06.774739       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:18:06.775775       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:18:06.775869       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:18:06.775961       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:18:06.780467       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:18:06.780523       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:18:06.780548       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:18:06.780552       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:18:06.780557       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:18:06.780673       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:18:06.785910       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:18:06.792090       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 09:18:06.796429       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:18:06.797618       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:18:06.799850       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [ac55486a499ee462d3e0e111469c5d5af99e91ca5df597256a68ff492f0d410b] <==
	I1018 09:18:04.336151       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:18:04.387210       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:18:04.487975       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:18:04.488015       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1018 09:18:04.488120       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:18:04.511074       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:18:04.511148       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:18:04.517790       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:18:04.518152       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:18:04.518184       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:18:04.519927       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:18:04.519958       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:18:04.519984       1 config.go:200] "Starting service config controller"
	I1018 09:18:04.519993       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:18:04.519990       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:18:04.520004       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:18:04.520559       1 config.go:309] "Starting node config controller"
	I1018 09:18:04.520574       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:18:04.520583       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:18:04.620166       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:18:04.620244       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:18:04.620246       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b91ae2df424fdafe037b2eea7a39a37f80929e5ab4c76c1169ce7ba3b9a4bdbd] <==
	I1018 09:18:02.250613       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:18:03.773953       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:18:03.773982       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:18:03.780390       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:18:03.780521       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 09:18:03.780536       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 09:18:03.780577       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:18:03.781619       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:18:03.781639       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:18:03.781660       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:18:03.781668       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:18:03.881241       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 09:18:03.881718       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:18:03.881728       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:18:02 newest-cni-444637 kubelet[673]: E1018 09:18:02.981497     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-444637\" not found" node="newest-cni-444637"
	Oct 18 09:18:02 newest-cni-444637 kubelet[673]: E1018 09:18:02.981637     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-444637\" not found" node="newest-cni-444637"
	Oct 18 09:18:02 newest-cni-444637 kubelet[673]: E1018 09:18:02.981796     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-444637\" not found" node="newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.444145     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: E1018 09:18:03.460032     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-444637\" already exists" pod="kube-system/etcd-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.460101     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: E1018 09:18:03.470244     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-444637\" already exists" pod="kube-system/kube-apiserver-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.470291     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.474791     673 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.474884     673 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.474921     673 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.476048     673 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: E1018 09:18:03.478386     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-444637\" already exists" pod="kube-system/kube-controller-manager-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.478469     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: E1018 09:18:03.484083     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-444637\" already exists" pod="kube-system/kube-scheduler-newest-cni-444637"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.940325     673 apiserver.go:52] "Watching apiserver"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.970759     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d70417da-43f2-4d8c-a088-07cea5225c34-xtables-lock\") pod \"kube-proxy-hbkn5\" (UID: \"d70417da-43f2-4d8c-a088-07cea5225c34\") " pod="kube-system/kube-proxy-hbkn5"
	Oct 18 09:18:03 newest-cni-444637 kubelet[673]: I1018 09:18:03.970826     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d70417da-43f2-4d8c-a088-07cea5225c34-lib-modules\") pod \"kube-proxy-hbkn5\" (UID: \"d70417da-43f2-4d8c-a088-07cea5225c34\") " pod="kube-system/kube-proxy-hbkn5"
	Oct 18 09:18:04 newest-cni-444637 kubelet[673]: I1018 09:18:04.045553     673 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 09:18:04 newest-cni-444637 kubelet[673]: I1018 09:18:04.071671     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2c82849a-5511-43a1-a300-a7f46df288ec-cni-cfg\") pod \"kindnet-qmlcq\" (UID: \"2c82849a-5511-43a1-a300-a7f46df288ec\") " pod="kube-system/kindnet-qmlcq"
	Oct 18 09:18:04 newest-cni-444637 kubelet[673]: I1018 09:18:04.071875     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c82849a-5511-43a1-a300-a7f46df288ec-xtables-lock\") pod \"kindnet-qmlcq\" (UID: \"2c82849a-5511-43a1-a300-a7f46df288ec\") " pod="kube-system/kindnet-qmlcq"
	Oct 18 09:18:04 newest-cni-444637 kubelet[673]: I1018 09:18:04.071920     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c82849a-5511-43a1-a300-a7f46df288ec-lib-modules\") pod \"kindnet-qmlcq\" (UID: \"2c82849a-5511-43a1-a300-a7f46df288ec\") " pod="kube-system/kindnet-qmlcq"
	Oct 18 09:18:05 newest-cni-444637 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:18:05 newest-cni-444637 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:18:05 newest-cni-444637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-444637 -n newest-cni-444637
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-444637 -n newest-cni-444637: exit status 2 (332.289132ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-444637 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gc5dd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-94qsp kubernetes-dashboard-855c9754f9-g5zw6
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-444637 describe pod coredns-66bc5c9577-gc5dd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-94qsp kubernetes-dashboard-855c9754f9-g5zw6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-444637 describe pod coredns-66bc5c9577-gc5dd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-94qsp kubernetes-dashboard-855c9754f9-g5zw6: exit status 1 (75.660357ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gc5dd" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-94qsp" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-g5zw6" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-444637 describe pod coredns-66bc5c9577-gc5dd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-94qsp kubernetes-dashboard-855c9754f9-g5zw6: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.06s)

TestStartStop/group/embed-certs/serial/Pause (6.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-880603 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-880603 --alsologtostderr -v=1: exit status 80 (2.353751131s)

-- stdout --
	* Pausing node embed-certs-880603 ... 
	
	

-- /stdout --
** stderr ** 
	I1018 09:18:11.516537  333933 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:18:11.516866  333933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:18:11.516880  333933 out.go:374] Setting ErrFile to fd 2...
	I1018 09:18:11.516886  333933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:18:11.517235  333933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:18:11.517615  333933 out.go:368] Setting JSON to false
	I1018 09:18:11.517671  333933 mustload.go:65] Loading cluster: embed-certs-880603
	I1018 09:18:11.518182  333933 config.go:182] Loaded profile config "embed-certs-880603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:18:11.518803  333933 cli_runner.go:164] Run: docker container inspect embed-certs-880603 --format={{.State.Status}}
	I1018 09:18:11.539544  333933 host.go:66] Checking if "embed-certs-880603" exists ...
	I1018 09:18:11.539852  333933 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:18:11.605077  333933 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-18 09:18:11.591733387 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:18:11.605790  333933 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-880603 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:18:11.608590  333933 out.go:179] * Pausing node embed-certs-880603 ... 
	I1018 09:18:11.609710  333933 host.go:66] Checking if "embed-certs-880603" exists ...
	I1018 09:18:11.609964  333933 ssh_runner.go:195] Run: systemctl --version
	I1018 09:18:11.610023  333933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-880603
	I1018 09:18:11.630372  333933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/embed-certs-880603/id_rsa Username:docker}
	I1018 09:18:11.729390  333933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:18:11.742388  333933 pause.go:52] kubelet running: true
	I1018 09:18:11.742453  333933 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:18:11.926810  333933 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:18:11.926896  333933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:18:12.001053  333933 cri.go:89] found id: "8bad8c86d5f84ce4f24ac728a7dc50d9a1b6a8a07e0f88ebe5640d5ce8dd72ef"
	I1018 09:18:12.001088  333933 cri.go:89] found id: "68c43d93bcc08f0db42212289b551dc9b0614da25c6fa8caff073aced341e2bd"
	I1018 09:18:12.001092  333933 cri.go:89] found id: "29561b719e51746ae7b9206a1fb65330b21daa2b29035245490abc4a2b5912d8"
	I1018 09:18:12.001095  333933 cri.go:89] found id: "7642771a96629ecf015c65966266ce95ca17e3edcd86d6a51e666854ab2ddb6f"
	I1018 09:18:12.001098  333933 cri.go:89] found id: "43567ee0750735c42ca3e8a987a5f7de05f91d9cc6c196a312f126a0fb9db347"
	I1018 09:18:12.001101  333933 cri.go:89] found id: "bb56d20e298366d034e8ab343121b80816150a01f06f5e3dcf23656917831fa8"
	I1018 09:18:12.001103  333933 cri.go:89] found id: "299c2f3530014f0f00b7697904d0bbe7e76037825ca7356cddf1e7e8e4cfbd3f"
	I1018 09:18:12.001105  333933 cri.go:89] found id: "0e0ff398f2a3fe03603d026ff3c4c4aa2a99cc70201bf2049eaef07838ab4ad9"
	I1018 09:18:12.001108  333933 cri.go:89] found id: "ec50948aa740993206f6c6b998952e98a6c0c34a9993daeba8381c7072181c67"
	I1018 09:18:12.001113  333933 cri.go:89] found id: "cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a"
	I1018 09:18:12.001115  333933 cri.go:89] found id: "5206a326b6863dc499d944fee0a747773134d23171f6cea0eed24802ac4170f1"
	I1018 09:18:12.001119  333933 cri.go:89] found id: ""
	I1018 09:18:12.001155  333933 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:18:12.013510  333933 retry.go:31] will retry after 176.965ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:12Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:18:12.190925  333933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:18:12.204181  333933 pause.go:52] kubelet running: false
	I1018 09:18:12.204236  333933 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:18:12.338872  333933 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:18:12.338979  333933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:18:12.408441  333933 cri.go:89] found id: "8bad8c86d5f84ce4f24ac728a7dc50d9a1b6a8a07e0f88ebe5640d5ce8dd72ef"
	I1018 09:18:12.408464  333933 cri.go:89] found id: "68c43d93bcc08f0db42212289b551dc9b0614da25c6fa8caff073aced341e2bd"
	I1018 09:18:12.408467  333933 cri.go:89] found id: "29561b719e51746ae7b9206a1fb65330b21daa2b29035245490abc4a2b5912d8"
	I1018 09:18:12.408471  333933 cri.go:89] found id: "7642771a96629ecf015c65966266ce95ca17e3edcd86d6a51e666854ab2ddb6f"
	I1018 09:18:12.408474  333933 cri.go:89] found id: "43567ee0750735c42ca3e8a987a5f7de05f91d9cc6c196a312f126a0fb9db347"
	I1018 09:18:12.408477  333933 cri.go:89] found id: "bb56d20e298366d034e8ab343121b80816150a01f06f5e3dcf23656917831fa8"
	I1018 09:18:12.408480  333933 cri.go:89] found id: "299c2f3530014f0f00b7697904d0bbe7e76037825ca7356cddf1e7e8e4cfbd3f"
	I1018 09:18:12.408482  333933 cri.go:89] found id: "0e0ff398f2a3fe03603d026ff3c4c4aa2a99cc70201bf2049eaef07838ab4ad9"
	I1018 09:18:12.408485  333933 cri.go:89] found id: "ec50948aa740993206f6c6b998952e98a6c0c34a9993daeba8381c7072181c67"
	I1018 09:18:12.408504  333933 cri.go:89] found id: "cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a"
	I1018 09:18:12.408507  333933 cri.go:89] found id: "5206a326b6863dc499d944fee0a747773134d23171f6cea0eed24802ac4170f1"
	I1018 09:18:12.408509  333933 cri.go:89] found id: ""
	I1018 09:18:12.408548  333933 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:18:12.420613  333933 retry.go:31] will retry after 466.572098ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:12Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:18:12.888325  333933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:18:12.902229  333933 pause.go:52] kubelet running: false
	I1018 09:18:12.902291  333933 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:18:13.047521  333933 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:18:13.047591  333933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:18:13.120774  333933 cri.go:89] found id: "8bad8c86d5f84ce4f24ac728a7dc50d9a1b6a8a07e0f88ebe5640d5ce8dd72ef"
	I1018 09:18:13.120795  333933 cri.go:89] found id: "68c43d93bcc08f0db42212289b551dc9b0614da25c6fa8caff073aced341e2bd"
	I1018 09:18:13.120798  333933 cri.go:89] found id: "29561b719e51746ae7b9206a1fb65330b21daa2b29035245490abc4a2b5912d8"
	I1018 09:18:13.120801  333933 cri.go:89] found id: "7642771a96629ecf015c65966266ce95ca17e3edcd86d6a51e666854ab2ddb6f"
	I1018 09:18:13.120804  333933 cri.go:89] found id: "43567ee0750735c42ca3e8a987a5f7de05f91d9cc6c196a312f126a0fb9db347"
	I1018 09:18:13.120808  333933 cri.go:89] found id: "bb56d20e298366d034e8ab343121b80816150a01f06f5e3dcf23656917831fa8"
	I1018 09:18:13.120822  333933 cri.go:89] found id: "299c2f3530014f0f00b7697904d0bbe7e76037825ca7356cddf1e7e8e4cfbd3f"
	I1018 09:18:13.120824  333933 cri.go:89] found id: "0e0ff398f2a3fe03603d026ff3c4c4aa2a99cc70201bf2049eaef07838ab4ad9"
	I1018 09:18:13.120827  333933 cri.go:89] found id: "ec50948aa740993206f6c6b998952e98a6c0c34a9993daeba8381c7072181c67"
	I1018 09:18:13.120832  333933 cri.go:89] found id: "cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a"
	I1018 09:18:13.120834  333933 cri.go:89] found id: "5206a326b6863dc499d944fee0a747773134d23171f6cea0eed24802ac4170f1"
	I1018 09:18:13.120837  333933 cri.go:89] found id: ""
	I1018 09:18:13.120876  333933 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:18:13.133646  333933 retry.go:31] will retry after 418.662356ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:13Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:18:13.553294  333933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:18:13.567441  333933 pause.go:52] kubelet running: false
	I1018 09:18:13.567508  333933 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:18:13.709590  333933 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:18:13.709682  333933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:18:13.785201  333933 cri.go:89] found id: "8bad8c86d5f84ce4f24ac728a7dc50d9a1b6a8a07e0f88ebe5640d5ce8dd72ef"
	I1018 09:18:13.785226  333933 cri.go:89] found id: "68c43d93bcc08f0db42212289b551dc9b0614da25c6fa8caff073aced341e2bd"
	I1018 09:18:13.785233  333933 cri.go:89] found id: "29561b719e51746ae7b9206a1fb65330b21daa2b29035245490abc4a2b5912d8"
	I1018 09:18:13.785237  333933 cri.go:89] found id: "7642771a96629ecf015c65966266ce95ca17e3edcd86d6a51e666854ab2ddb6f"
	I1018 09:18:13.785241  333933 cri.go:89] found id: "43567ee0750735c42ca3e8a987a5f7de05f91d9cc6c196a312f126a0fb9db347"
	I1018 09:18:13.785246  333933 cri.go:89] found id: "bb56d20e298366d034e8ab343121b80816150a01f06f5e3dcf23656917831fa8"
	I1018 09:18:13.785250  333933 cri.go:89] found id: "299c2f3530014f0f00b7697904d0bbe7e76037825ca7356cddf1e7e8e4cfbd3f"
	I1018 09:18:13.785254  333933 cri.go:89] found id: "0e0ff398f2a3fe03603d026ff3c4c4aa2a99cc70201bf2049eaef07838ab4ad9"
	I1018 09:18:13.785258  333933 cri.go:89] found id: "ec50948aa740993206f6c6b998952e98a6c0c34a9993daeba8381c7072181c67"
	I1018 09:18:13.785267  333933 cri.go:89] found id: "cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a"
	I1018 09:18:13.785271  333933 cri.go:89] found id: "5206a326b6863dc499d944fee0a747773134d23171f6cea0eed24802ac4170f1"
	I1018 09:18:13.785275  333933 cri.go:89] found id: ""
	I1018 09:18:13.785319  333933 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:18:13.804974  333933 out.go:203] 
	W1018 09:18:13.807185  333933 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:18:13.807213  333933 out.go:285] * 
	* 
	W1018 09:18:13.811371  333933 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:18:13.812921  333933 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-880603 --alsologtostderr -v=1 failed: exit status 80
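
The failing step is identical across all four retries: `sudo runc list -f json` exits 1 with `open /run/runc: no such file or directory`. Pause first disables the kubelet (visible as `kubelet running: false` after the first attempt) and then tries to enumerate running containers through runc's state directory, which never appears on this node. One plausible explanation, not confirmed by this log, is that crio is driving containers through crun, whose state lives under /run/crun rather than /run/runc. A minimal diagnostic sketch under that assumption (the `minikube ssh` commands and the /run/crun path are illustrative, not part of the test run):

	minikube ssh -p embed-certs-880603 "sudo runc list -f json"                  # reproduces: open /run/runc: no such file or directory
	minikube ssh -p embed-certs-880603 "sudo crictl ps"                          # the CRI still reports the containers found above
	minikube ssh -p embed-certs-880603 "ls -d /run/runc /run/crun 2>/dev/null"   # shows which OCI runtime state directory actually exists
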
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-880603
helpers_test.go:243: (dbg) docker inspect embed-certs-880603:

-- stdout --
	[
	    {
	        "Id": "1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e",
	        "Created": "2025-10-18T09:15:37.716133173Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 318893,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:17:11.413038444Z",
	            "FinishedAt": "2025-10-18T09:17:09.924125702Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e/hostname",
	        "HostsPath": "/var/lib/docker/containers/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e/hosts",
	        "LogPath": "/var/lib/docker/containers/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e-json.log",
	        "Name": "/embed-certs-880603",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-880603:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-880603",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e",
	                "LowerDir": "/var/lib/docker/overlay2/9fc765f8ac24538015479acdd40b3d737a7cedebc070de1fc4ec4d150a46823c-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9fc765f8ac24538015479acdd40b3d737a7cedebc070de1fc4ec4d150a46823c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9fc765f8ac24538015479acdd40b3d737a7cedebc070de1fc4ec4d150a46823c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9fc765f8ac24538015479acdd40b3d737a7cedebc070de1fc4ec4d150a46823c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-880603",
	                "Source": "/var/lib/docker/volumes/embed-certs-880603/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-880603",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-880603",
	                "name.minikube.sigs.k8s.io": "embed-certs-880603",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "12933ea06a7caa33922f055301b3032f75e0dd54dc5d7c73a61ceb680577f958",
	            "SandboxKey": "/var/run/docker/netns/12933ea06a7c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-880603": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:01:b4:00:07:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "00da72598f1f33e65a58d1743a0dfc899ddee3ad08c7f711e26bf3f40d92300d",
	                    "EndpointID": "ab3db92b9a2cedd6607e0ee355bb7d94fe4cea69900e42bee9ef0ab857590231",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-880603",
	                        "1b6bc4c9714c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
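
The inspect output matches the SSH session the pause command opened: container port 22/tcp is published on 127.0.0.1:33118, the same address and port as the `new ssh client` line in the stderr above. The template query the tooling ran (taken verbatim from the log; it only returns a value while the container exists) can be replayed directly:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-880603   # prints 33118
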
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880603 -n embed-certs-880603
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880603 -n embed-certs-880603: exit status 2 (336.569148ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-880603 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-880603 logs -n 25: (1.118833965s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-951975 image list --format=json                                                                                                                                                                                               │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ pause   │ -p old-k8s-version-951975 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-986220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-880603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-986220 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ image   │ no-preload-031066 image list --format=json                                                                                                                                                                                                    │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ pause   │ -p no-preload-031066 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-986220 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ delete  │ -p no-preload-031066                                                                                                                                                                                                                          │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p no-preload-031066                                                                                                                                                                                                                          │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-444637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ stop    │ -p newest-cni-444637 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-444637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:18 UTC │
	│ image   │ newest-cni-444637 image list --format=json                                                                                                                                                                                                    │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ pause   │ -p newest-cni-444637 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ image   │ embed-certs-880603 image list --format=json                                                                                                                                                                                                   │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ pause   │ -p embed-certs-880603 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ delete  │ -p newest-cni-444637                                                                                                                                                                                                                          │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ delete  │ -p newest-cni-444637                                                                                                                                                                                                                          │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:17:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:17:54.427005  330193 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:17:54.427270  330193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:54.427281  330193 out.go:374] Setting ErrFile to fd 2...
	I1018 09:17:54.427287  330193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:54.427525  330193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:17:54.428050  330193 out.go:368] Setting JSON to false
	I1018 09:17:54.429280  330193 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3622,"bootTime":1760775452,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:17:54.429387  330193 start.go:141] virtualization: kvm guest
	I1018 09:17:54.431635  330193 out.go:179] * [newest-cni-444637] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:17:54.432952  330193 notify.go:220] Checking for updates...
	I1018 09:17:54.432979  330193 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:17:54.434488  330193 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:17:54.435897  330193 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:54.437111  330193 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:17:54.438264  330193 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:17:54.439545  330193 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:17:54.441204  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:54.441727  330193 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:17:54.467746  330193 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:17:54.467827  330193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:54.527403  330193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 09:17:54.515566485 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:54.527559  330193 docker.go:318] overlay module found
	I1018 09:17:54.529436  330193 out.go:179] * Using the docker driver based on existing profile
	I1018 09:17:54.530557  330193 start.go:305] selected driver: docker
	I1018 09:17:54.530578  330193 start.go:925] validating driver "docker" against &{Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:54.530680  330193 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:17:54.531357  330193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:54.591156  330193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 09:17:54.580755477 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:54.591532  330193 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:17:54.591566  330193 cni.go:84] Creating CNI manager for ""
	I1018 09:17:54.591617  330193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:17:54.591683  330193 start.go:349] cluster config:
	{Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:54.593449  330193 out.go:179] * Starting "newest-cni-444637" primary control-plane node in "newest-cni-444637" cluster
	I1018 09:17:54.594724  330193 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:17:54.596122  330193 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:17:54.597292  330193 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:17:54.597335  330193 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:17:54.597376  330193 cache.go:58] Caching tarball of preloaded images
	I1018 09:17:54.597366  330193 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:17:54.597499  330193 preload.go:233] Found /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:17:54.597519  330193 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:17:54.597628  330193 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json ...
	I1018 09:17:54.619906  330193 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:17:54.619924  330193 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:17:54.619939  330193 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:17:54.619961  330193 start.go:360] acquireMachinesLock for newest-cni-444637: {Name:mkf6974ca6fc7b22cdf212b383f50d3f090ea59b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:17:54.620020  330193 start.go:364] duration metric: took 43.166µs to acquireMachinesLock for "newest-cni-444637"
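
The acquireMachinesLock lines above show minikube serializing machine mutations behind a named lock with a 500ms retry delay and a 10-minute timeout (the spec printed in the log). A minimal, stdlib-only Go sketch of that polling-lock pattern; the lock-file path and API here are illustrative assumptions, not minikube's actual implementation, which uses a dedicated mutex library:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file, retrying every delay until timeout.
// O_CREATE|O_EXCL makes creation atomic: exactly one process wins each attempt.
func acquire(lockPath string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			f.Close()
			return func() { os.Remove(lockPath) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err // real I/O error, not contention
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out after %v waiting for %s", timeout, lockPath)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println("lock:", err)
		return
	}
	defer release()
	fmt.Println("holding machines lock; safe to mutate machine state")
}

Uncontended acquisition is why the log records only 43µs to take the lock here.
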
	I1018 09:17:54.620037  330193 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:17:54.620042  330193 fix.go:54] fixHost starting: 
	I1018 09:17:54.620234  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:54.638627  330193 fix.go:112] recreateIfNeeded on newest-cni-444637: state=Stopped err=<nil>
	W1018 09:17:54.638652  330193 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:17:51.833553  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:53.833757  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:56.034833  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:17:58.534991  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	I1018 09:17:54.640543  330193 out.go:252] * Restarting existing docker container for "newest-cni-444637" ...
	I1018 09:17:54.640644  330193 cli_runner.go:164] Run: docker start newest-cni-444637
	I1018 09:17:54.903916  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:54.923445  330193 kic.go:430] container "newest-cni-444637" state is running.
	I1018 09:17:54.923919  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:54.944878  330193 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json ...
	I1018 09:17:54.945143  330193 machine.go:93] provisionDockerMachine start ...
	I1018 09:17:54.945221  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:54.965135  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:54.965422  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:54.965438  330193 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:17:54.966008  330193 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59674->127.0.0.1:33133: read: connection reset by peer
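
The "connection reset by peer" on the first dial is expected right after docker start: the container is up before sshd inside it accepts connections, so the provisioner simply retries until the handshake succeeds (about three seconds later below). A stdlib-only sketch of that readiness wait, treating a successful TCP connect as the probe; the address and timeouts are illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials addr until a TCP connection succeeds or the deadline passes.
func waitForSSH(addr string, retry, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, retry)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh on %s not reachable after %v: %w", addr, timeout, err)
		}
		time.Sleep(retry)
	}
}

func main() {
	// 127.0.0.1:33133 is the forwarded SSH port shown in the log above.
	if err := waitForSSH("127.0.0.1:33133", time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("sshd is accepting connections")
}
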
	I1018 09:17:58.102821  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-444637
	
	I1018 09:17:58.102846  330193 ubuntu.go:182] provisioning hostname "newest-cni-444637"
	I1018 09:17:58.102902  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.121992  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.122251  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.122274  330193 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-444637 && echo "newest-cni-444637" | sudo tee /etc/hostname
	I1018 09:17:58.271611  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-444637
	
	I1018 09:17:58.271696  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.295116  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.295331  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.295366  330193 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-444637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-444637/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-444637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:17:58.435338  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:17:58.435406  330193 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:17:58.435457  330193 ubuntu.go:190] setting up certificates
	I1018 09:17:58.435470  330193 provision.go:84] configureAuth start
	I1018 09:17:58.435550  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:58.454683  330193 provision.go:143] copyHostCerts
	I1018 09:17:58.454758  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:17:58.454789  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:17:58.454878  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:17:58.455021  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:17:58.455032  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:17:58.455077  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:17:58.455176  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:17:58.455185  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:17:58.455229  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:17:58.455323  330193 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-444637 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-444637]
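
The server cert generated above is signed by the minikube CA and carries both IP and DNS SANs (127.0.0.1, 192.168.103.2, localhost, minikube, newest-cni-444637), so the same certificate validates however the endpoint is reached. A self-contained crypto/x509 sketch of a CA-signed server cert with that SAN set; the throwaway in-memory CA stands in for ca.pem/ca-key.pem, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA (stands in for minikube's ca.pem/ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN set the provisioner logs above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-444637"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-444637"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
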
	I1018 09:17:58.651717  330193 provision.go:177] copyRemoteCerts
	I1018 09:17:58.651791  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:17:58.651850  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.670990  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:58.769295  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:17:58.788403  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:17:58.807495  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:17:58.826308  330193 provision.go:87] duration metric: took 390.822036ms to configureAuth
	I1018 09:17:58.826335  330193 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:17:58.826534  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:58.826624  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.845940  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.846169  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.846191  330193 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:17:59.117215  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:17:59.117238  330193 machine.go:96] duration metric: took 4.172078969s to provisionDockerMachine
	I1018 09:17:59.117253  330193 start.go:293] postStartSetup for "newest-cni-444637" (driver="docker")
	I1018 09:17:59.117266  330193 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:17:59.117338  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:17:59.117401  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.136996  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.235549  330193 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:17:59.239452  330193 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:17:59.239483  330193 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:17:59.239505  330193 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:17:59.239563  330193 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:17:59.239658  330193 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:17:59.239788  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:17:59.248379  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:17:59.268012  330193 start.go:296] duration metric: took 150.737252ms for postStartSetup
	I1018 09:17:59.268099  330193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:17:59.268146  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.287401  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.382795  330193 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:17:59.388305  330193 fix.go:56] duration metric: took 4.768253133s for fixHost
	I1018 09:17:59.388338  330193 start.go:83] releasing machines lock for "newest-cni-444637", held for 4.76830641s
	I1018 09:17:59.388481  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:59.407756  330193 ssh_runner.go:195] Run: cat /version.json
	I1018 09:17:59.407798  330193 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:17:59.407876  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.407803  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	W1018 09:17:56.333478  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	I1018 09:17:58.333556  318609 pod_ready.go:94] pod "coredns-66bc5c9577-7fnw7" is "Ready"
	I1018 09:17:58.333585  318609 pod_ready.go:86] duration metric: took 36.506179321s for pod "coredns-66bc5c9577-7fnw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.336410  318609 pod_ready.go:83] waiting for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.341932  318609 pod_ready.go:94] pod "etcd-embed-certs-880603" is "Ready"
	I1018 09:17:58.341964  318609 pod_ready.go:86] duration metric: took 5.525225ms for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.344669  318609 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.349852  318609 pod_ready.go:94] pod "kube-apiserver-embed-certs-880603" is "Ready"
	I1018 09:17:58.349882  318609 pod_ready.go:86] duration metric: took 5.170321ms for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.352067  318609 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.532002  318609 pod_ready.go:94] pod "kube-controller-manager-embed-certs-880603" is "Ready"
	I1018 09:17:58.532034  318609 pod_ready.go:86] duration metric: took 179.946406ms for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.732243  318609 pod_ready.go:83] waiting for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.131632  318609 pod_ready.go:94] pod "kube-proxy-k4kcs" is "Ready"
	I1018 09:17:59.131665  318609 pod_ready.go:86] duration metric: took 399.394452ms for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.332088  318609 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.734805  318609 pod_ready.go:94] pod "kube-scheduler-embed-certs-880603" is "Ready"
	I1018 09:17:59.734842  318609 pod_ready.go:86] duration metric: took 402.724813ms for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.734856  318609 pod_ready.go:40] duration metric: took 37.912005765s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:17:59.783224  318609 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:17:59.785136  318609 out.go:179] * Done! kubectl is now configured to use "embed-certs-880603" cluster and "default" namespace by default
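
The pod_ready polling above (from the parallel embed-certs run) loops on each kube-system pod until its Ready condition is True, or until the pod is gone or the deadline passes. A rough standalone equivalent that shells out to kubectl rather than using client-go; the pod name, namespace, and deadline are taken from the log, but the helper itself is illustrative, not minikube's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady returns true when the pod's Ready condition reports "True".
func podReady(ns, name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", name, "-n", ns,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ok, err := podReady("kube-system", "coredns-66bc5c9577-7fnw7")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
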
	I1018 09:17:59.428145  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.430455  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.580030  330193 ssh_runner.go:195] Run: systemctl --version
	I1018 09:17:59.587085  330193 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:17:59.625510  330193 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:17:59.630784  330193 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:17:59.630846  330193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:17:59.639622  330193 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:17:59.639650  330193 start.go:495] detecting cgroup driver to use...
	I1018 09:17:59.639695  330193 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:17:59.639752  330193 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:17:59.654825  330193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:17:59.668280  330193 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:17:59.668366  330193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:17:59.683973  330193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:17:59.698385  330193 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:17:59.790586  330193 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:17:59.892076  330193 docker.go:234] disabling docker service ...
	I1018 09:17:59.892147  330193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:17:59.908881  330193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:17:59.922861  330193 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:18:00.012767  330193 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:18:00.112051  330193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:18:00.125686  330193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:18:00.142184  330193 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:18:00.142248  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.153446  330193 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:18:00.153510  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.163772  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.173529  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.183180  330193 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:18:00.192357  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.202160  330193 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.211313  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.221003  330193 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:18:00.229269  330193 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
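
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the systemd cgroup driver detected on the host, re-insert conmon_cgroup = "pod", and open unprivileged ports via default_sysctls, before systemd is reloaded and crio restarted below. A sketch of the same line-oriented rewrites in Go with regexp, operating on an in-memory sample config (the "before" values are hypothetical) rather than the remote file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
`
	// Pin the pause image, mirroring: sed 's|^.*pause_image = .*$|pause_image = "..."|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Force the systemd cgroup driver to match the detected host driver.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	// Drop any stale conmon_cgroup, then re-add it right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
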
	I1018 09:18:00.238137  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:00.320620  330193 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:18:00.435033  330193 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:18:00.435106  330193 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:18:00.439539  330193 start.go:563] Will wait 60s for crictl version
	I1018 09:18:00.439606  330193 ssh_runner.go:195] Run: which crictl
	I1018 09:18:00.443682  330193 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:18:00.469987  330193 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:18:00.470070  330193 ssh_runner.go:195] Run: crio --version
	I1018 09:18:00.500186  330193 ssh_runner.go:195] Run: crio --version
	I1018 09:18:00.531772  330193 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:18:00.533155  330193 cli_runner.go:164] Run: docker network inspect newest-cni-444637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:18:00.552284  330193 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:18:00.556833  330193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
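
The bash one-liner above is an idempotent hosts-file upsert: filter out any existing line ending in the tab-separated hostname, append the fresh mapping, and copy the result back over /etc/hosts (the same pattern reappears below for control-plane.minikube.internal). The same rewrite as a pure function in Go, operating on file contents as a string; the sudo/SSH plumbing is omitted:

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any line ending in "\t<host>" and appends "ip\thost",
// matching the grep -v / echo pipeline in the log.
func upsertHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.103.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(before, "192.168.103.1", "host.minikube.internal"))
}
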
	I1018 09:18:00.569469  330193 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:18:00.570643  330193 kubeadm.go:883] updating cluster {Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:18:00.570761  330193 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:18:00.570826  330193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:18:00.604611  330193 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:18:00.604633  330193 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:18:00.604679  330193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:18:00.632395  330193 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:18:00.632438  330193 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:18:00.632446  330193 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:18:00.632555  330193 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-444637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:18:00.632630  330193 ssh_runner.go:195] Run: crio config
	I1018 09:18:00.683711  330193 cni.go:84] Creating CNI manager for ""
	I1018 09:18:00.683732  330193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:18:00.683746  330193 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:18:00.683770  330193 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-444637 NodeName:newest-cni-444637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:18:00.683897  330193 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-444637"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:18:00.683961  330193 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:18:00.693538  330193 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:18:00.693611  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:18:00.701785  330193 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:18:00.715623  330193 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:18:00.729315  330193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:18:00.742706  330193 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:18:00.746993  330193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:18:00.758274  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:00.846197  330193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:18:00.874953  330193 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637 for IP: 192.168.103.2
	I1018 09:18:00.874980  330193 certs.go:195] generating shared ca certs ...
	I1018 09:18:00.875000  330193 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:00.875152  330193 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:18:00.875197  330193 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:18:00.875207  330193 certs.go:257] generating profile certs ...
	I1018 09:18:00.875295  330193 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/client.key
	I1018 09:18:00.875391  330193 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key.d9d366ba
	I1018 09:18:00.875439  330193 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key
	I1018 09:18:00.875557  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:18:00.875586  330193 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:18:00.875596  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:18:00.875619  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:18:00.875641  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:18:00.875661  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:18:00.875704  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:18:00.876245  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:18:00.896645  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:18:00.916475  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:18:00.937413  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:18:00.962164  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:18:00.982149  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:18:01.001065  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:18:01.021602  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:18:01.041260  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:18:01.060553  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:18:01.080521  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:18:01.099406  330193 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:18:01.112902  330193 ssh_runner.go:195] Run: openssl version
	I1018 09:18:01.119558  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:18:01.128761  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.133075  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.133130  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.169581  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:18:01.178326  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:18:01.187653  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.191858  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.191912  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.227900  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:18:01.236865  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:18:01.245974  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.250554  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.250615  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.285905  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:18:01.295059  330193 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:18:01.299170  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:18:01.334401  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:18:01.369411  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:18:01.417245  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:18:01.463956  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:18:01.519260  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
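
Each openssl x509 ... -checkend 86400 run above asks a single question: does this certificate expire within the next 24 hours (86,400 seconds)? The equivalent check in Go, reading one PEM certificate from disk; the path is one of the files checked above, and any local PEM cert works:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
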
	I1018 09:18:01.564643  330193 kubeadm.go:400] StartCluster: {Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:18:01.564725  330193 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:18:01.564799  330193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:18:01.596025  330193 cri.go:89] found id: "014aa61b2a700319893c6b11615bd85925597f0738e1bf960657bba99a7ac5ea"
	I1018 09:18:01.596053  330193 cri.go:89] found id: "b91ae2df424fdafe037b2eea7a39a37f80929e5ab4c76c1169ce7ba3b9a4bdbd"
	I1018 09:18:01.596059  330193 cri.go:89] found id: "49cf2e65f5a6801e31db940684d60041512ed73bbf34778abdfd8025afc8b25b"
	I1018 09:18:01.596064  330193 cri.go:89] found id: "390882244d27208c7b2d7d0538a0ff970ed197d0a63b391f3e1c81bd7b8255df"
	I1018 09:18:01.596069  330193 cri.go:89] found id: ""
	I1018 09:18:01.596114  330193 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:18:01.609602  330193 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:01Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:18:01.609687  330193 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:18:01.619278  330193 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:18:01.619297  330193 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:18:01.619376  330193 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:18:01.628525  330193 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:18:01.629710  330193 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-444637" does not appear in /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:18:01.630508  330193 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-5897/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-444637" cluster setting kubeconfig missing "newest-cni-444637" context setting]
	I1018 09:18:01.631708  330193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.633868  330193 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:18:01.643225  330193 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1018 09:18:01.643268  330193 kubeadm.go:601] duration metric: took 23.964839ms to restartPrimaryControlPlane
	I1018 09:18:01.643282  330193 kubeadm.go:402] duration metric: took 78.647978ms to StartCluster
	I1018 09:18:01.643303  330193 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.643398  330193 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:18:01.645409  330193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.645688  330193 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:18:01.645769  330193 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:18:01.645862  330193 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-444637"
	I1018 09:18:01.645882  330193 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-444637"
	W1018 09:18:01.645893  330193 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:18:01.645893  330193 addons.go:69] Setting dashboard=true in profile "newest-cni-444637"
	I1018 09:18:01.645921  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.645934  330193 addons.go:238] Setting addon dashboard=true in "newest-cni-444637"
	W1018 09:18:01.645945  330193 addons.go:247] addon dashboard should already be in state true
	I1018 09:18:01.645945  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:18:01.645948  330193 addons.go:69] Setting default-storageclass=true in profile "newest-cni-444637"
	I1018 09:18:01.645973  330193 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-444637"
	I1018 09:18:01.645980  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.646303  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.646463  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.646481  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.647698  330193 out.go:179] * Verifying Kubernetes components...
	I1018 09:18:01.649210  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:01.673812  330193 addons.go:238] Setting addon default-storageclass=true in "newest-cni-444637"
	W1018 09:18:01.673837  330193 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:18:01.673877  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.674375  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.674516  330193 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:18:01.678901  330193 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:18:01.678924  330193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:18:01.678985  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.679140  330193 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:18:01.680475  330193 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:18:01.681672  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:18:01.681729  330193 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:18:01.681827  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.707736  330193 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:18:01.707766  330193 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:18:01.707826  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.713270  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.719016  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.734187  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.812631  330193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:18:01.828229  330193 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:18:01.828317  330193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:18:01.829858  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:18:01.835854  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:18:01.835874  330193 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:18:01.845491  330193 api_server.go:72] duration metric: took 199.769202ms to wait for apiserver process to appear ...
	I1018 09:18:01.845522  330193 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:18:01.845544  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:01.852363  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:18:01.854253  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:18:01.854275  330193 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:18:01.872324  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:18:01.872363  330193 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:18:01.891549  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:18:01.891576  330193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:18:01.910545  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:18:01.910574  330193 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:18:01.928312  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:18:01.928337  330193 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:18:01.942869  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:18:01.942897  330193 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:18:01.957264  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:18:01.957287  330193 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:18:01.971834  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:18:01.971871  330193 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:18:01.988808  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:18:03.360064  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:18:03.360099  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:18:03.360117  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:03.416525  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1018 09:18:03.416558  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1018 09:18:03.845768  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:03.850882  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:18:03.850913  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:18:03.925688  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.095784279s)
	I1018 09:18:03.925778  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.073377378s)
	I1018 09:18:03.925913  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.937061029s)
	I1018 09:18:03.929127  330193 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-444637 addons enable metrics-server
	
	I1018 09:18:03.937380  330193 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1018 09:18:01.035250  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:03.035670  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	I1018 09:18:03.938934  330193 addons.go:514] duration metric: took 2.293172614s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:18:04.346493  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:04.351148  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:18:04.351178  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:18:04.845878  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:04.850252  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 09:18:04.851396  330193 api_server.go:141] control plane version: v1.34.1
	I1018 09:18:04.851430  330193 api_server.go:131] duration metric: took 3.005900151s to wait for apiserver health ...
	I1018 09:18:04.851440  330193 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:18:04.855053  330193 system_pods.go:59] 8 kube-system pods found
	I1018 09:18:04.855092  330193 system_pods.go:61] "coredns-66bc5c9577-gc5dd" [7fab8a8d-bdb4-47d4-bf7d-d03341018666] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:18:04.855100  330193 system_pods.go:61] "etcd-newest-cni-444637" [b54d61ad-b52d-4343-ba3a-a64b03934319] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:18:04.855111  330193 system_pods.go:61] "kindnet-qmlcq" [2c82849a-5511-43a1-a300-a7f46df288ec] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 09:18:04.855117  330193 system_pods.go:61] "kube-apiserver-newest-cni-444637" [a9136c1f-8962-45f7-b005-05bd3f856403] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:18:04.855124  330193 system_pods.go:61] "kube-controller-manager-newest-cni-444637" [b8d840d7-04c3-495c-aafa-cc8a06e58f06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:18:04.855130  330193 system_pods.go:61] "kube-proxy-hbkn5" [d70417da-43f2-4d8c-a088-07cea5225c34] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:18:04.855138  330193 system_pods.go:61] "kube-scheduler-newest-cni-444637" [175527c5-4260-4e39-be83-4c36417f3cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:18:04.855142  330193 system_pods.go:61] "storage-provisioner" [b0974a78-b6ad-45c3-8241-86f8bb7bc65b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:18:04.855151  330193 system_pods.go:74] duration metric: took 3.706424ms to wait for pod list to return data ...
	I1018 09:18:04.855162  330193 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:18:04.857785  330193 default_sa.go:45] found service account: "default"
	I1018 09:18:04.857804  330193 default_sa.go:55] duration metric: took 2.636173ms for default service account to be created ...
	I1018 09:18:04.857817  330193 kubeadm.go:586] duration metric: took 3.212102689s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:18:04.857837  330193 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:18:04.860449  330193 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:18:04.860472  330193 node_conditions.go:123] node cpu capacity is 8
	I1018 09:18:04.860486  330193 node_conditions.go:105] duration metric: took 2.642504ms to run NodePressure ...
	I1018 09:18:04.860498  330193 start.go:241] waiting for startup goroutines ...
	I1018 09:18:04.860504  330193 start.go:246] waiting for cluster config update ...
	I1018 09:18:04.860514  330193 start.go:255] writing updated cluster config ...
	I1018 09:18:04.860806  330193 ssh_runner.go:195] Run: rm -f paused
	I1018 09:18:04.910604  330193 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:18:04.913879  330193 out.go:179] * Done! kubectl is now configured to use "newest-cni-444637" cluster and "default" namespace by default
	W1018 09:18:05.535906  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:08.034961  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:10.535644  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:13.036211  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
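
The poll loop above is minikube's apiserver readiness gate: /healthz is hit roughly every 500ms, a 403 means the RBAC bootstrap policies are not yet loaded, a 500 lists the failing poststarthooks, and only a plain 200 "ok" ends the wait (3.0s total here, per the duration metric). The interleaved pod_ready warnings tagged with PID 324191 come from a concurrent test process writing to the same stream. A minimal sketch of the polling pattern, assuming a plain net/http client with certificate verification disabled for brevity (the real check in api_server.go authenticates against the cluster's CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
    // 403s (RBAC bootstrap roles not yet created) and 500s (failing poststarthooks)
    // are logged and retried rather than treated as fatal.
    func waitForHealthz(url string, interval, timeout time.Duration) error {
        client := &http.Client{
            // Illustration only: the real check pins the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("apiserver not healthy after %v", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.103.2:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }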
	
	
	==> CRI-O <==
	Oct 18 09:17:32 embed-certs-880603 crio[561]: time="2025-10-18T09:17:32.013797252Z" level=info msg="Started container" PID=1730 containerID=bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84/dashboard-metrics-scraper id=72ce1277-014f-453a-af8d-c2b4254d84ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=51dd13b3022bc01fa88441f94247ed30c94bc94bc6e33967bdeea83b88017b61
	Oct 18 09:17:32 embed-certs-880603 crio[561]: time="2025-10-18T09:17:32.959559177Z" level=info msg="Removing container: b244aa331b34689be43fddea7fa93ba93e92293ff85f70c8e94be81113f2eb74" id=28a524bb-d0e1-447a-a21e-bc0fe1982fd6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:17:32 embed-certs-880603 crio[561]: time="2025-10-18T09:17:32.969742835Z" level=info msg="Removed container b244aa331b34689be43fddea7fa93ba93e92293ff85f70c8e94be81113f2eb74: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84/dashboard-metrics-scraper" id=28a524bb-d0e1-447a-a21e-bc0fe1982fd6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.013915148Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ac29d7d9-69bb-4110-8bc0-2315b2cf2359 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.015089268Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cc6edc12-6c8e-4c42-b498-4d9393100031 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.016315895Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8fe833e9-71bd-40dc-a9b7-acb96e7cf3e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.01661665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.021284593Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.021517294Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5faf218f6af00f667adb32e7657e838cb3ff017072ff7b2b1fb0ac1e60678a1f/merged/etc/passwd: no such file or directory"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.021545254Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5faf218f6af00f667adb32e7657e838cb3ff017072ff7b2b1fb0ac1e60678a1f/merged/etc/group: no such file or directory"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.021772826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.048224712Z" level=info msg="Created container 8bad8c86d5f84ce4f24ac728a7dc50d9a1b6a8a07e0f88ebe5640d5ce8dd72ef: kube-system/storage-provisioner/storage-provisioner" id=8fe833e9-71bd-40dc-a9b7-acb96e7cf3e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.049005071Z" level=info msg="Starting container: 8bad8c86d5f84ce4f24ac728a7dc50d9a1b6a8a07e0f88ebe5640d5ce8dd72ef" id=a573b38d-5817-497a-921a-059a178d4f9c name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.050766445Z" level=info msg="Started container" PID=1747 containerID=8bad8c86d5f84ce4f24ac728a7dc50d9a1b6a8a07e0f88ebe5640d5ce8dd72ef description=kube-system/storage-provisioner/storage-provisioner id=a573b38d-5817-497a-921a-059a178d4f9c name=/runtime.v1.RuntimeService/StartContainer sandboxID=9436eeeacd09b92d402b2cbbfda96949569eec44e3c00365c39f17dbaa42e2a6
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.886315023Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f4a6fd99-77f9-41c1-95bb-fc51ddd68978 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.887398136Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8ff266fc-9d9e-4685-9b2f-e1d3a528edd7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.888583882Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84/dashboard-metrics-scraper" id=90da2caf-c48e-4766-b2ad-8a4e9469a84f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.888848206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.894925618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.895701583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.923594343Z" level=info msg="Created container cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84/dashboard-metrics-scraper" id=90da2caf-c48e-4766-b2ad-8a4e9469a84f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.924409047Z" level=info msg="Starting container: cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a" id=5645e2f1-deac-4881-9cdd-e703f4db987d name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.926450204Z" level=info msg="Started container" PID=1763 containerID=cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84/dashboard-metrics-scraper id=5645e2f1-deac-4881-9cdd-e703f4db987d name=/runtime.v1.RuntimeService/StartContainer sandboxID=51dd13b3022bc01fa88441f94247ed30c94bc94bc6e33967bdeea83b88017b61
	Oct 18 09:17:53 embed-certs-880603 crio[561]: time="2025-10-18T09:17:53.024757641Z" level=info msg="Removing container: bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4" id=6f7d597e-1845-4097-b82d-87a7f83a011c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:17:53 embed-certs-880603 crio[561]: time="2025-10-18T09:17:53.034978598Z" level=info msg="Removed container bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84/dashboard-metrics-scraper" id=6f7d597e-1845-4097-b82d-87a7f83a011c name=/runtime.v1.RuntimeService/RemoveContainer
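
The CRI-O entries are the runtime side of kubelet's CRI gRPC calls (ImageStatus, CreateContainer, StartContainer, RemoveContainer), each tagged with its request id and sandbox. The same socket can be queried directly; a hedged sketch using the CRI v1 API, where the socket path is the CRI-O default and `crictl ps -a` wraps the same ListContainers call:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed default CRI-O socket; the unix:// scheme is resolved by grpc-go.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Same RPC the "container status" table below is built from.
        resp, err := runtimeapi.NewRuntimeServiceClient(conn).
            ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %-22s %v\n", c.Id[:13], c.Metadata.Name, c.State)
        }
    }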
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	cb9cf0dca6916       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   51dd13b3022bc       dashboard-metrics-scraper-6ffb444bf9-nsd84   kubernetes-dashboard
	8bad8c86d5f84       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   9436eeeacd09b       storage-provisioner                          kube-system
	5206a326b6863       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   b8014ad19633d       kubernetes-dashboard-855c9754f9-bdrc4        kubernetes-dashboard
	647069fa7dcc1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   6ba3442f8723e       busybox                                      default
	68c43d93bcc08       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   e1d993b2ab509       coredns-66bc5c9577-7fnw7                     kube-system
	29561b719e517       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   9436eeeacd09b       storage-provisioner                          kube-system
	7642771a96629       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   658333a40c53a       kindnet-wzdm5                                kube-system
	43567ee075073       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   84577859be70a       kube-proxy-k4kcs                             kube-system
	bb56d20e29836       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   b3c30d20e1576       kube-apiserver-embed-certs-880603            kube-system
	299c2f3530014       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   9b865ebb6718a       kube-scheduler-embed-certs-880603            kube-system
	0e0ff398f2a3f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   afa8527d7c9e8       kube-controller-manager-embed-certs-880603   kube-system
	ec50948aa7409       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   be62eb6127f67       etcd-embed-certs-880603                      kube-system
	
	
	==> coredns [68c43d93bcc08f0db42212289b551dc9b0614da25c6fa8caff073aced341e2bd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38653 - 17524 "HINFO IN 5232044663050345143.335907963301023360. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.095079108s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
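
The reflector timeouts against 10.96.0.1:443 (the in-cluster "kubernetes" service VIP) are expected in this window: CoreDNS came up before kube-proxy had reprogrammed the service rules after the restart, so TCP to the VIP blackholes until the proxy's caches sync at 09:17:21 (see the kube-proxy section further down). A quick probe that reproduces what the reflector sees, meant to be run from inside a pod:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 10.96.0.1:443 is the "kubernetes" service VIP from the log. Before
        // kube-proxy programs its rules this dial times out, which is exactly
        // the reflector error CoreDNS reports above.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        conn.Close()
        fmt.Println("service VIP reachable")
    }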
	
	
	==> describe nodes <==
	Name:               embed-certs-880603
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-880603
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=embed-certs-880603
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_15_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:15:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-880603
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:18:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:17:51 +0000   Sat, 18 Oct 2025 09:15:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:17:51 +0000   Sat, 18 Oct 2025 09:15:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:17:51 +0000   Sat, 18 Oct 2025 09:15:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:17:51 +0000   Sat, 18 Oct 2025 09:16:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-880603
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                8a50a24b-e651-4f1d-8d2e-12e3c28f7fe8
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-7fnw7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m14s
	  kube-system                 etcd-embed-certs-880603                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m20s
	  kube-system                 kindnet-wzdm5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m14s
	  kube-system                 kube-apiserver-embed-certs-880603             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-embed-certs-880603    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-k4kcs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-embed-certs-880603             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nsd84    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-bdrc4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m14s                  kube-proxy       
	  Normal  Starting                 53s                    kube-proxy       
	  Normal  Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m25s (x8 over 2m25s)  kubelet          Node embed-certs-880603 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m25s (x8 over 2m25s)  kubelet          Node embed-certs-880603 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m25s (x8 over 2m25s)  kubelet          Node embed-certs-880603 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m20s                  kubelet          Node embed-certs-880603 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m20s                  kubelet          Node embed-certs-880603 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m20s                  kubelet          Node embed-certs-880603 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m20s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m15s                  node-controller  Node embed-certs-880603 event: Registered Node embed-certs-880603 in Controller
	  Normal  NodeReady                93s                    kubelet          Node embed-certs-880603 status is now: NodeReady
	  Normal  Starting                 57s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)      kubelet          Node embed-certs-880603 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)      kubelet          Node embed-certs-880603 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)      kubelet          Node embed-certs-880603 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                    node-controller  Node embed-certs-880603 event: Registered Node embed-certs-880603 in Controller
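
The Allocated resources block is just the column sums of the pod table rendered as integer percentages of allocatable capacity: CPU requests 100m+100m+100m+250m+200m+100m = 850m out of 8000m, memory requests 70Mi+100Mi+50Mi = 220Mi out of 32863448Ki. A quick check of that arithmetic, assuming kubectl truncates toward zero, which matches the 10% and 0% shown:

    package main

    import "fmt"

    func main() {
        // Values copied from the pod table above (requests only).
        cpuRequestsMilli := 100 + 100 + 100 + 250 + 200 + 100 // coredns, etcd, kindnet, apiserver, controller-manager, scheduler
        allocatableMilli := 8 * 1000
        memRequestsKi := 220 * 1024 // 220Mi
        allocatableKi := 32863448

        // Integer division truncates toward zero, matching kubectl's 10% and 0%.
        fmt.Printf("cpu:    %dm (%d%%)\n", cpuRequestsMilli, cpuRequestsMilli*100/allocatableMilli)
        fmt.Printf("memory: %dKi (%d%%)\n", memRequestsKi, memRequestsKi*100/allocatableKi)
    }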
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[ +43.809014] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 6f 4b 2b 7f 46 08 06
	[  +0.000367] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	
	
	==> etcd [ec50948aa740993206f6c6b998952e98a6c0c34a9993daeba8381c7072181c67] <==
	{"level":"warn","ts":"2025-10-18T09:17:19.526242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.532651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.542159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.548852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.555224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.561750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.569295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.576245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.582771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.589661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.596557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.603554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.610171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.616941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.623166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.629785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.636667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.647233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.655057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.661837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.669386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.690584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.697250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.705260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.758784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39450","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:15 up  1:00,  0 user,  load average: 4.32, 3.77, 2.57
	Linux embed-certs-880603 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7642771a96629ecf015c65966266ce95ca17e3edcd86d6a51e666854ab2ddb6f] <==
	I1018 09:17:21.481115       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:17:21.481534       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:17:21.481721       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:17:21.481735       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:17:21.481766       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:17:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:17:21.684916       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:17:21.685034       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:17:21.685052       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:17:21.686186       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:17:22.085411       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:17:22.085441       1 metrics.go:72] Registering metrics
	I1018 09:17:22.085512       1 controller.go:711] "Syncing nftables rules"
	I1018 09:17:31.685555       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:17:31.685623       1 main.go:301] handling current node
	I1018 09:17:41.685918       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:17:41.685973       1 main.go:301] handling current node
	I1018 09:17:51.684949       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:17:51.684990       1 main.go:301] handling current node
	I1018 09:18:01.686431       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:18:01.686500       1 main.go:301] handling current node
	I1018 09:18:11.691296       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:18:11.691327       1 main.go:301] handling current node
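
kindnet's reconcile runs on a fixed ~10s cadence (09:17:21, :31, :41, ...), re-handling the full node set each tick whether or not anything changed. A minimal sketch of that loop shape, with the per-node work reduced to a log line:

    package main

    import (
        "fmt"
        "time"
    )

    // reconcile stands in for kindnet's per-node sync (routes, nftables rules);
    // here it only reports the node IPs it was handed.
    func reconcile(nodeIPs map[string]struct{}) {
        fmt.Printf("Handling node with IPs: %v\n", nodeIPs)
    }

    func main() {
        nodes := map[string]struct{}{"192.168.76.2": {}}
        ticker := time.NewTicker(10 * time.Second)
        defer ticker.Stop()
        for range ticker.C {
            reconcile(nodes) // full resync every tick, changed or not
        }
    }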
	
	
	==> kube-apiserver [bb56d20e298366d034e8ab343121b80816150a01f06f5e3dcf23656917831fa8] <==
	I1018 09:17:20.269046       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:17:20.269114       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:17:20.269130       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:17:20.269136       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:17:20.269142       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:17:20.269266       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 09:17:20.269318       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:17:20.275808       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:17:20.308815       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:17:20.318253       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:17:20.318306       1 policy_source.go:240] refreshing policies
	I1018 09:17:20.397633       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:17:20.568183       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:17:20.607182       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:17:20.633579       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:17:20.642582       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:17:20.650416       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:17:20.693280       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.14.148"}
	I1018 09:17:20.704896       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.199.26"}
	I1018 09:17:21.174107       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:17:23.906441       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:17:24.006754       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:17:24.006754       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:17:24.206585       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:17:24.206584       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0e0ff398f2a3fe03603d026ff3c4c4aa2a99cc70201bf2049eaef07838ab4ad9] <==
	I1018 09:17:23.574490       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:17:23.576824       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:17:23.579129       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:17:23.581393       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:17:23.587639       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:17:23.587665       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:17:23.587675       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:17:23.602562       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:17:23.602587       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:17:23.602614       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:17:23.602680       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:17:23.602811       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:17:23.603197       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:17:23.603310       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:17:23.603723       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:17:23.605208       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 09:17:23.607475       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:17:23.607623       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:17:23.609759       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:17:23.609800       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:17:23.609854       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:17:23.609897       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:17:23.609914       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:17:23.609922       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:17:23.628490       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [43567ee0750735c42ca3e8a987a5f7de05f91d9cc6c196a312f126a0fb9db347] <==
	I1018 09:17:21.302532       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:17:21.359374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:17:21.460555       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:17:21.460600       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:17:21.460696       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:17:21.486232       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:17:21.486297       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:17:21.492960       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:17:21.493495       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:17:21.493583       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:17:21.495100       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:17:21.495127       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:17:21.495170       1 config.go:200] "Starting service config controller"
	I1018 09:17:21.495176       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:17:21.495199       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:17:21.495229       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:17:21.495280       1 config.go:309] "Starting node config controller"
	I1018 09:17:21.495727       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:17:21.595785       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:17:21.595812       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:17:21.595823       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:17:21.595851       1 shared_informer.go:356] "Caches are synced" controller="node config"
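
kube-proxy, like the scheduler and controller-manager sections nearby, follows the standard client-go startup pattern: create informers, start the factory, then block on WaitForCacheSync before serving, which is exactly what the paired "Waiting for caches to sync"/"Caches are synced" lines record. A minimal sketch of that pattern, assuming it runs in-cluster:

    package main

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes this runs inside a pod
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        factory := informers.NewSharedInformerFactory(client, 30*time.Second)
        svcInformer := factory.Core().V1().Services().Informer()

        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()
        factory.Start(ctx.Done())

        fmt.Println("Waiting for caches to sync")
        if !cache.WaitForCacheSync(ctx.Done(), svcInformer.HasSynced) {
            panic("caches did not sync")
        }
        fmt.Println("Caches are synced")
    }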
	
	
	==> kube-scheduler [299c2f3530014f0f00b7697904d0bbe7e76037825ca7356cddf1e7e8e4cfbd3f] <==
	I1018 09:17:19.415542       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:17:20.180956       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:17:20.181058       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:17:20.181107       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:17:20.181136       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:17:20.228835       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:17:20.228871       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:17:20.235233       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:17:20.235277       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:17:20.236291       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:17:20.236376       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:17:20.335470       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:17:24 embed-certs-880603 kubelet[718]: I1018 09:17:24.218021     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-889gf\" (UniqueName: \"kubernetes.io/projected/3a382a27-a21c-4ec2-8631-c1534993e7c4-kube-api-access-889gf\") pod \"dashboard-metrics-scraper-6ffb444bf9-nsd84\" (UID: \"3a382a27-a21c-4ec2-8631-c1534993e7c4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84"
	Oct 18 09:17:24 embed-certs-880603 kubelet[718]: I1018 09:17:24.218045     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/edf6770a-0607-485d-8eef-aab09553ed76-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-bdrc4\" (UID: \"edf6770a-0607-485d-8eef-aab09553ed76\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bdrc4"
	Oct 18 09:17:28 embed-certs-880603 kubelet[718]: I1018 09:17:28.256833     718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:17:29 embed-certs-880603 kubelet[718]: I1018 09:17:29.971533     718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bdrc4" podStartSLOduration=1.5264020459999998 podStartE2EDuration="5.971506864s" podCreationTimestamp="2025-10-18 09:17:24 +0000 UTC" firstStartedPulling="2025-10-18 09:17:24.452059025 +0000 UTC m=+6.662591918" lastFinishedPulling="2025-10-18 09:17:28.897163822 +0000 UTC m=+11.107696736" observedRunningTime="2025-10-18 09:17:29.969359381 +0000 UTC m=+12.179892298" watchObservedRunningTime="2025-10-18 09:17:29.971506864 +0000 UTC m=+12.182039778"
	Oct 18 09:17:31 embed-certs-880603 kubelet[718]: I1018 09:17:31.952768     718 scope.go:117] "RemoveContainer" containerID="b244aa331b34689be43fddea7fa93ba93e92293ff85f70c8e94be81113f2eb74"
	Oct 18 09:17:32 embed-certs-880603 kubelet[718]: I1018 09:17:32.957918     718 scope.go:117] "RemoveContainer" containerID="b244aa331b34689be43fddea7fa93ba93e92293ff85f70c8e94be81113f2eb74"
	Oct 18 09:17:32 embed-certs-880603 kubelet[718]: I1018 09:17:32.958071     718 scope.go:117] "RemoveContainer" containerID="bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4"
	Oct 18 09:17:32 embed-certs-880603 kubelet[718]: E1018 09:17:32.958278     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nsd84_kubernetes-dashboard(3a382a27-a21c-4ec2-8631-c1534993e7c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84" podUID="3a382a27-a21c-4ec2-8631-c1534993e7c4"
	Oct 18 09:17:33 embed-certs-880603 kubelet[718]: I1018 09:17:33.963882     718 scope.go:117] "RemoveContainer" containerID="bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4"
	Oct 18 09:17:33 embed-certs-880603 kubelet[718]: E1018 09:17:33.964045     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nsd84_kubernetes-dashboard(3a382a27-a21c-4ec2-8631-c1534993e7c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84" podUID="3a382a27-a21c-4ec2-8631-c1534993e7c4"
	Oct 18 09:17:39 embed-certs-880603 kubelet[718]: I1018 09:17:39.320781     718 scope.go:117] "RemoveContainer" containerID="bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4"
	Oct 18 09:17:39 embed-certs-880603 kubelet[718]: E1018 09:17:39.321128     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nsd84_kubernetes-dashboard(3a382a27-a21c-4ec2-8631-c1534993e7c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84" podUID="3a382a27-a21c-4ec2-8631-c1534993e7c4"
	Oct 18 09:17:52 embed-certs-880603 kubelet[718]: I1018 09:17:52.013442     718 scope.go:117] "RemoveContainer" containerID="29561b719e51746ae7b9206a1fb65330b21daa2b29035245490abc4a2b5912d8"
	Oct 18 09:17:52 embed-certs-880603 kubelet[718]: I1018 09:17:52.885739     718 scope.go:117] "RemoveContainer" containerID="bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4"
	Oct 18 09:17:53 embed-certs-880603 kubelet[718]: I1018 09:17:53.023462     718 scope.go:117] "RemoveContainer" containerID="bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4"
	Oct 18 09:17:53 embed-certs-880603 kubelet[718]: I1018 09:17:53.023670     718 scope.go:117] "RemoveContainer" containerID="cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a"
	Oct 18 09:17:53 embed-certs-880603 kubelet[718]: E1018 09:17:53.023851     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nsd84_kubernetes-dashboard(3a382a27-a21c-4ec2-8631-c1534993e7c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84" podUID="3a382a27-a21c-4ec2-8631-c1534993e7c4"
	Oct 18 09:17:59 embed-certs-880603 kubelet[718]: I1018 09:17:59.319239     718 scope.go:117] "RemoveContainer" containerID="cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a"
	Oct 18 09:17:59 embed-certs-880603 kubelet[718]: E1018 09:17:59.319511     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nsd84_kubernetes-dashboard(3a382a27-a21c-4ec2-8631-c1534993e7c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84" podUID="3a382a27-a21c-4ec2-8631-c1534993e7c4"
	Oct 18 09:18:11 embed-certs-880603 kubelet[718]: I1018 09:18:11.886128     718 scope.go:117] "RemoveContainer" containerID="cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a"
	Oct 18 09:18:11 embed-certs-880603 kubelet[718]: E1018 09:18:11.886363     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nsd84_kubernetes-dashboard(3a382a27-a21c-4ec2-8631-c1534993e7c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84" podUID="3a382a27-a21c-4ec2-8631-c1534993e7c4"
	Oct 18 09:18:11 embed-certs-880603 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:18:11 embed-certs-880603 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:18:11 embed-certs-880603 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:18:11 embed-certs-880603 systemd[1]: kubelet.service: Consumed 1.847s CPU time.
	
	
	==> kubernetes-dashboard [5206a326b6863dc499d944fee0a747773134d23171f6cea0eed24802ac4170f1] <==
	2025/10/18 09:17:28 Starting overwatch
	2025/10/18 09:17:28 Using namespace: kubernetes-dashboard
	2025/10/18 09:17:28 Using in-cluster config to connect to apiserver
	2025/10/18 09:17:28 Using secret token for csrf signing
	2025/10/18 09:17:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:17:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:17:28 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:17:28 Generating JWE encryption key
	2025/10/18 09:17:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:17:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:17:29 Initializing JWE encryption key from synchronized object
	2025/10/18 09:17:29 Creating in-cluster Sidecar client
	2025/10/18 09:17:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:17:29 Serving insecurely on HTTP port: 9090
	2025/10/18 09:17:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [29561b719e51746ae7b9206a1fb65330b21daa2b29035245490abc4a2b5912d8] <==
	I1018 09:17:21.264125       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:17:51.267756       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8bad8c86d5f84ce4f24ac728a7dc50d9a1b6a8a07e0f88ebe5640d5ce8dd72ef] <==
	I1018 09:17:52.064277       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:17:52.071428       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:17:52.071488       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:17:52.073962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:55.529535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:59.793622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:03.394008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:06.448551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:09.470842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:09.475681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:18:09.475865       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:18:09.475999       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5e8e0df5-6f93-48eb-99a3-eaa105313a85", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-880603_eaa60fd9-6c0a-4297-b651-835586687f82 became leader
	I1018 09:18:09.476020       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-880603_eaa60fd9-6c0a-4297-b651-835586687f82!
	W1018 09:18:09.478921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:09.482576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:18:09.577358       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-880603_eaa60fd9-6c0a-4297-b651-835586687f82!
	W1018 09:18:11.486313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:11.491337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:13.494710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:13.502810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
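Two failure signatures in the logs above line up with the paused control plane this test exercises: the kube-scheduler could not read configmap/extension-apiserver-authentication, and the first storage-provisioner instance died with "dial tcp 10.96.0.1:443: i/o timeout" against the in-cluster apiserver service. The scheduler warning prints its own RBAC remedy; a minimal sketch of that command, keeping the log's placeholders verbatim (ROLEBINDING_NAME and YOUR_NS:YOUR_SA are not values from this run):
	kubectl create rolebinding ROLEBINDING_NAME -n kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=YOUR_NS:YOUR_SA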
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880603 -n embed-certs-880603
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880603 -n embed-certs-880603: exit status 2 (331.183299ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-880603 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-880603
helpers_test.go:243: (dbg) docker inspect embed-certs-880603:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e",
	        "Created": "2025-10-18T09:15:37.716133173Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 318893,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:17:11.413038444Z",
	            "FinishedAt": "2025-10-18T09:17:09.924125702Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e/hostname",
	        "HostsPath": "/var/lib/docker/containers/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e/hosts",
	        "LogPath": "/var/lib/docker/containers/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e/1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e-json.log",
	        "Name": "/embed-certs-880603",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-880603:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-880603",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1b6bc4c9714c158ab43e5d4a65bb819978ef8e9057777261e47f0a5ac38b8a4e",
	                "LowerDir": "/var/lib/docker/overlay2/9fc765f8ac24538015479acdd40b3d737a7cedebc070de1fc4ec4d150a46823c-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9fc765f8ac24538015479acdd40b3d737a7cedebc070de1fc4ec4d150a46823c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9fc765f8ac24538015479acdd40b3d737a7cedebc070de1fc4ec4d150a46823c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9fc765f8ac24538015479acdd40b3d737a7cedebc070de1fc4ec4d150a46823c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-880603",
	                "Source": "/var/lib/docker/volumes/embed-certs-880603/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-880603",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-880603",
	                "name.minikube.sigs.k8s.io": "embed-certs-880603",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "12933ea06a7caa33922f055301b3032f75e0dd54dc5d7c73a61ceb680577f958",
	            "SandboxKey": "/var/run/docker/netns/12933ea06a7c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-880603": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:01:b4:00:07:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "00da72598f1f33e65a58d1743a0dfc899ddee3ad08c7f711e26bf3f40d92300d",
	                    "EndpointID": "ab3db92b9a2cedd6607e0ee355bb7d94fe4cea69900e42bee9ef0ab857590231",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-880603",
	                        "1b6bc4c9714c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
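The NetworkSettings.Ports block in the inspect output above is what the harness uses to reach the node: each container port (22, 2376, 5000, 8443, 32443) is published on 127.0.0.1 with an ephemeral host port. Later in this log the harness extracts the SSH port with a Go template; a minimal sketch of the same query for the apiserver port, assuming the container captured above is still running:
	docker container inspect embed-certs-880603 \
	  -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
	# prints 33121 for the state shown in this inspect output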
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880603 -n embed-certs-880603
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880603 -n embed-certs-880603: exit status 2 (319.570833ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
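Both status probes above render a single field of minikube's status struct through a Go template ({{.APIServer}} earlier, {{.Host}} here); each prints "Running" while exiting 2, which the harness explicitly tolerates ("may be ok"), since a nonzero exit from "minikube status" can reflect stopped or paused components rather than a failed query. A minimal sketch reading several fields in one call; {{.Host}} and {{.APIServer}} appear in this log, while {{.Kubelet}} and {{.Kubeconfig}} are assumed from minikube's standard status output:
	out/minikube-linux-amd64 status -p embed-certs-880603 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'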
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-880603 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-880603 logs -n 25: (1.114979579s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-951975 image list --format=json                                                                                                                                                                                               │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │ 18 Oct 25 09:16 UTC │
	│ pause   │ -p old-k8s-version-951975 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:16 UTC │                     │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p old-k8s-version-951975                                                                                                                                                                                                                     │ old-k8s-version-951975       │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-986220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-880603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-986220 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ image   │ no-preload-031066 image list --format=json                                                                                                                                                                                                    │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ pause   │ -p no-preload-031066 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-986220 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ delete  │ -p no-preload-031066                                                                                                                                                                                                                          │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p no-preload-031066                                                                                                                                                                                                                          │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-444637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ stop    │ -p newest-cni-444637 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-444637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:18 UTC │
	│ image   │ newest-cni-444637 image list --format=json                                                                                                                                                                                                    │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ pause   │ -p newest-cni-444637 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ image   │ embed-certs-880603 image list --format=json                                                                                                                                                                                                   │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ pause   │ -p embed-certs-880603 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ delete  │ -p newest-cni-444637                                                                                                                                                                                                                          │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ delete  │ -p newest-cni-444637                                                                                                                                                                                                                          │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:17:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:17:54.427005  330193 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:17:54.427270  330193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:54.427281  330193 out.go:374] Setting ErrFile to fd 2...
	I1018 09:17:54.427287  330193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:54.427525  330193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:17:54.428050  330193 out.go:368] Setting JSON to false
	I1018 09:17:54.429280  330193 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3622,"bootTime":1760775452,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:17:54.429387  330193 start.go:141] virtualization: kvm guest
	I1018 09:17:54.431635  330193 out.go:179] * [newest-cni-444637] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:17:54.432952  330193 notify.go:220] Checking for updates...
	I1018 09:17:54.432979  330193 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:17:54.434488  330193 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:17:54.435897  330193 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:54.437111  330193 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:17:54.438264  330193 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:17:54.439545  330193 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:17:54.441204  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:54.441727  330193 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:17:54.467746  330193 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:17:54.467827  330193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:54.527403  330193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 09:17:54.515566485 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:54.527559  330193 docker.go:318] overlay module found
	I1018 09:17:54.529436  330193 out.go:179] * Using the docker driver based on existing profile
	I1018 09:17:54.530557  330193 start.go:305] selected driver: docker
	I1018 09:17:54.530578  330193 start.go:925] validating driver "docker" against &{Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:54.530680  330193 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:17:54.531357  330193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:54.591156  330193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 09:17:54.580755477 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:54.591532  330193 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:17:54.591566  330193 cni.go:84] Creating CNI manager for ""
	I1018 09:17:54.591617  330193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:17:54.591683  330193 start.go:349] cluster config:
	{Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:54.593449  330193 out.go:179] * Starting "newest-cni-444637" primary control-plane node in "newest-cni-444637" cluster
	I1018 09:17:54.594724  330193 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:17:54.596122  330193 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:17:54.597292  330193 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:17:54.597335  330193 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:17:54.597376  330193 cache.go:58] Caching tarball of preloaded images
	I1018 09:17:54.597366  330193 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:17:54.597499  330193 preload.go:233] Found /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:17:54.597519  330193 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:17:54.597628  330193 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json ...
	I1018 09:17:54.619906  330193 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:17:54.619924  330193 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:17:54.619939  330193 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:17:54.619961  330193 start.go:360] acquireMachinesLock for newest-cni-444637: {Name:mkf6974ca6fc7b22cdf212b383f50d3f090ea59b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:17:54.620020  330193 start.go:364] duration metric: took 43.166µs to acquireMachinesLock for "newest-cni-444637"
	I1018 09:17:54.620037  330193 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:17:54.620042  330193 fix.go:54] fixHost starting: 
	I1018 09:17:54.620234  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:54.638627  330193 fix.go:112] recreateIfNeeded on newest-cni-444637: state=Stopped err=<nil>
	W1018 09:17:54.638652  330193 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:17:51.833553  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:53.833757  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:56.034833  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:17:58.534991  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	I1018 09:17:54.640543  330193 out.go:252] * Restarting existing docker container for "newest-cni-444637" ...
	I1018 09:17:54.640644  330193 cli_runner.go:164] Run: docker start newest-cni-444637
	I1018 09:17:54.903916  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:54.923445  330193 kic.go:430] container "newest-cni-444637" state is running.
	I1018 09:17:54.923919  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:54.944878  330193 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json ...
	I1018 09:17:54.945143  330193 machine.go:93] provisionDockerMachine start ...
	I1018 09:17:54.945221  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:54.965135  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:54.965422  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:54.965438  330193 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:17:54.966008  330193 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59674->127.0.0.1:33133: read: connection reset by peer
	I1018 09:17:58.102821  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-444637
	
	I1018 09:17:58.102846  330193 ubuntu.go:182] provisioning hostname "newest-cni-444637"
	I1018 09:17:58.102902  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.121992  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.122251  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.122274  330193 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-444637 && echo "newest-cni-444637" | sudo tee /etc/hostname
	I1018 09:17:58.271611  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-444637
	
	I1018 09:17:58.271696  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.295116  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.295331  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.295366  330193 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-444637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-444637/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-444637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:17:58.435338  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:17:58.435406  330193 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:17:58.435457  330193 ubuntu.go:190] setting up certificates
	I1018 09:17:58.435470  330193 provision.go:84] configureAuth start
	I1018 09:17:58.435550  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:58.454683  330193 provision.go:143] copyHostCerts
	I1018 09:17:58.454758  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:17:58.454789  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:17:58.454878  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:17:58.455021  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:17:58.455032  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:17:58.455077  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:17:58.455176  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:17:58.455185  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:17:58.455229  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:17:58.455323  330193 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-444637 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-444637]
	I1018 09:17:58.651717  330193 provision.go:177] copyRemoteCerts
	I1018 09:17:58.651791  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:17:58.651850  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.670990  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:58.769295  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:17:58.788403  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:17:58.807495  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:17:58.826308  330193 provision.go:87] duration metric: took 390.822036ms to configureAuth
	I1018 09:17:58.826335  330193 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:17:58.826534  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:58.826624  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.845940  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.846169  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.846191  330193 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:17:59.117215  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:17:59.117238  330193 machine.go:96] duration metric: took 4.172078969s to provisionDockerMachine
	I1018 09:17:59.117253  330193 start.go:293] postStartSetup for "newest-cni-444637" (driver="docker")
	I1018 09:17:59.117266  330193 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:17:59.117338  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:17:59.117401  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.136996  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.235549  330193 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:17:59.239452  330193 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:17:59.239483  330193 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:17:59.239505  330193 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:17:59.239563  330193 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:17:59.239658  330193 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:17:59.239788  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:17:59.248379  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:17:59.268012  330193 start.go:296] duration metric: took 150.737252ms for postStartSetup
	I1018 09:17:59.268099  330193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:17:59.268146  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.287401  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.382795  330193 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:17:59.388305  330193 fix.go:56] duration metric: took 4.768253133s for fixHost
	I1018 09:17:59.388338  330193 start.go:83] releasing machines lock for "newest-cni-444637", held for 4.76830641s
	I1018 09:17:59.388481  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:59.407756  330193 ssh_runner.go:195] Run: cat /version.json
	I1018 09:17:59.407798  330193 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:17:59.407876  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.407803  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	W1018 09:17:56.333478  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	I1018 09:17:58.333556  318609 pod_ready.go:94] pod "coredns-66bc5c9577-7fnw7" is "Ready"
	I1018 09:17:58.333585  318609 pod_ready.go:86] duration metric: took 36.506179321s for pod "coredns-66bc5c9577-7fnw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.336410  318609 pod_ready.go:83] waiting for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.341932  318609 pod_ready.go:94] pod "etcd-embed-certs-880603" is "Ready"
	I1018 09:17:58.341964  318609 pod_ready.go:86] duration metric: took 5.525225ms for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.344669  318609 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.349852  318609 pod_ready.go:94] pod "kube-apiserver-embed-certs-880603" is "Ready"
	I1018 09:17:58.349882  318609 pod_ready.go:86] duration metric: took 5.170321ms for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.352067  318609 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.532002  318609 pod_ready.go:94] pod "kube-controller-manager-embed-certs-880603" is "Ready"
	I1018 09:17:58.532034  318609 pod_ready.go:86] duration metric: took 179.946406ms for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.732243  318609 pod_ready.go:83] waiting for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.131632  318609 pod_ready.go:94] pod "kube-proxy-k4kcs" is "Ready"
	I1018 09:17:59.131665  318609 pod_ready.go:86] duration metric: took 399.394452ms for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.332088  318609 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.734805  318609 pod_ready.go:94] pod "kube-scheduler-embed-certs-880603" is "Ready"
	I1018 09:17:59.734842  318609 pod_ready.go:86] duration metric: took 402.724813ms for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.734856  318609 pod_ready.go:40] duration metric: took 37.912005765s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:17:59.783224  318609 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:17:59.785136  318609 out.go:179] * Done! kubectl is now configured to use "embed-certs-880603" cluster and "default" namespace by default
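The pod_ready waits above (process 318609) poll each kube-system pod's Ready condition, tolerating transient get errors, until a per-pod deadline. A client-go sketch of an equivalent wait (the pod name comes from this log; the kubeconfig path and the polling intervals are assumptions):

// podready.go - sketch of a pod Ready wait like pod_ready.go above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// This run used a custom kubeconfig; ~/.kube/config is an assumption here.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-embed-certs-880603", metav1.GetOptions{})
			if err != nil {
				return false, nil // retry on transient errors, like the "error: <nil>" warnings above
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}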
	I1018 09:17:59.428145  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.430455  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.580030  330193 ssh_runner.go:195] Run: systemctl --version
	I1018 09:17:59.587085  330193 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:17:59.625510  330193 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:17:59.630784  330193 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:17:59.630846  330193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:17:59.639622  330193 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:17:59.639650  330193 start.go:495] detecting cgroup driver to use...
	I1018 09:17:59.639695  330193 detect.go:190] detected "systemd" cgroup driver on host os
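The detect.go:190 line above picks the cgroup driver that CRI-O and the kubelet must agree on. One common heuristic, shown here as an assumption rather than detect.go's exact logic: prefer "systemd" when the host runs the unified cgroup v2 hierarchy under a systemd PID 1, else fall back to "cgroupfs".

// detectdriver.go - sketch of one cgroup-driver heuristic (an assumption;
// minikube's detect.go may decide differently).
package main

import (
	"fmt"
	"os"
	"strings"
)

func cgroupDriver() string {
	// The unified (v2) hierarchy exposes cgroup.controllers at its root.
	_, errV2 := os.Stat("/sys/fs/cgroup/cgroup.controllers")
	comm, _ := os.ReadFile("/proc/1/comm")
	if errV2 == nil && strings.TrimSpace(string(comm)) == "systemd" {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Println(cgroupDriver())
}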
	I1018 09:17:59.639752  330193 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:17:59.654825  330193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:17:59.668280  330193 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:17:59.668366  330193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:17:59.683973  330193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:17:59.698385  330193 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:17:59.790586  330193 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:17:59.892076  330193 docker.go:234] disabling docker service ...
	I1018 09:17:59.892147  330193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:17:59.908881  330193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:17:59.922861  330193 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:18:00.012767  330193 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:18:00.112051  330193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:18:00.125686  330193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:18:00.142184  330193 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:18:00.142248  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.153446  330193 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:18:00.153510  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.163772  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.173529  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.183180  330193 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:18:00.192357  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.202160  330193 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.211313  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
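The sed pipeline above is easier to audit from its intended end state. Reconstructed from the commands themselves (section headers assumed, not captured from the host), the /etc/crio/crio.conf.d/02-crio.conf drop-in should end up roughly like:

# 02-crio.conf after the edits above (reconstruction, section placement assumed)
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]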
	I1018 09:18:00.221003  330193 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:18:00.229269  330193 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:18:00.238137  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:00.320620  330193 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:18:00.435033  330193 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:18:00.435106  330193 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:18:00.439539  330193 start.go:563] Will wait 60s for crictl version
	I1018 09:18:00.439606  330193 ssh_runner.go:195] Run: which crictl
	I1018 09:18:00.443682  330193 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:18:00.469987  330193 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
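The crictl version output above is a thin wrapper over the CRI Version RPC on the socket configured in /etc/crictl.yaml. A Go sketch of the same probe against CRI-O's socket (assumes k8s.io/cri-api and google.golang.org/grpc are available; field names match what crictl prints):

// criversion.go - sketch of the probe behind `crictl version`.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Local unix socket, so no transport security is needed.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// Same fields as the log block above.
	fmt.Printf("Version: %s\nRuntimeName: %s\nRuntimeVersion: %s\nRuntimeApiVersion: %s\n",
		resp.Version, resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}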
	I1018 09:18:00.470070  330193 ssh_runner.go:195] Run: crio --version
	I1018 09:18:00.500186  330193 ssh_runner.go:195] Run: crio --version
	I1018 09:18:00.531772  330193 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:18:00.533155  330193 cli_runner.go:164] Run: docker network inspect newest-cni-444637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:18:00.552284  330193 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:18:00.556833  330193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:18:00.569469  330193 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:18:00.570643  330193 kubeadm.go:883] updating cluster {Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:18:00.570761  330193 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:18:00.570826  330193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:18:00.604611  330193 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:18:00.604633  330193 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:18:00.604679  330193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:18:00.632395  330193 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:18:00.632438  330193 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:18:00.632446  330193 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:18:00.632555  330193 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-444637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:18:00.632630  330193 ssh_runner.go:195] Run: crio config
	I1018 09:18:00.683711  330193 cni.go:84] Creating CNI manager for ""
	I1018 09:18:00.683732  330193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:18:00.683746  330193 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:18:00.683770  330193 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-444637 NodeName:newest-cni-444637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:18:00.683897  330193 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-444637"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:18:00.683961  330193 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:18:00.693538  330193 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:18:00.693611  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:18:00.701785  330193 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:18:00.715623  330193 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:18:00.729315  330193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:18:00.742706  330193 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:18:00.746993  330193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:18:00.758274  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:00.846197  330193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:18:00.874953  330193 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637 for IP: 192.168.103.2
	I1018 09:18:00.874980  330193 certs.go:195] generating shared ca certs ...
	I1018 09:18:00.875000  330193 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:00.875152  330193 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:18:00.875197  330193 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:18:00.875207  330193 certs.go:257] generating profile certs ...
	I1018 09:18:00.875295  330193 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/client.key
	I1018 09:18:00.875391  330193 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key.d9d366ba
	I1018 09:18:00.875439  330193 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key
	I1018 09:18:00.875557  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:18:00.875586  330193 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:18:00.875596  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:18:00.875619  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:18:00.875641  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:18:00.875661  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:18:00.875704  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:18:00.876245  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:18:00.896645  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:18:00.916475  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:18:00.937413  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:18:00.962164  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:18:00.982149  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:18:01.001065  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:18:01.021602  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:18:01.041260  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:18:01.060553  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:18:01.080521  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:18:01.099406  330193 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:18:01.112902  330193 ssh_runner.go:195] Run: openssl version
	I1018 09:18:01.119558  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:18:01.128761  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.133075  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.133130  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.169581  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:18:01.178326  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:18:01.187653  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.191858  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.191912  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.227900  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:18:01.236865  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:18:01.245974  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.250554  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.250615  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.285905  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:18:01.295059  330193 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:18:01.299170  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:18:01.334401  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:18:01.369411  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:18:01.417245  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:18:01.463956  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:18:01.519260  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
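The six openssl runs above each use -checkend 86400, i.e. "fail if this certificate expires within the next 24 hours". An equivalent check in Go (the cert path is one of those probed above; everything else is standard library):

// checkend.go - Go analogue of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls inside the next d (what -checkend reports).
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}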
	I1018 09:18:01.564643  330193 kubeadm.go:400] StartCluster: {Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:18:01.564725  330193 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:18:01.564799  330193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:18:01.596025  330193 cri.go:89] found id: "014aa61b2a700319893c6b11615bd85925597f0738e1bf960657bba99a7ac5ea"
	I1018 09:18:01.596053  330193 cri.go:89] found id: "b91ae2df424fdafe037b2eea7a39a37f80929e5ab4c76c1169ce7ba3b9a4bdbd"
	I1018 09:18:01.596059  330193 cri.go:89] found id: "49cf2e65f5a6801e31db940684d60041512ed73bbf34778abdfd8025afc8b25b"
	I1018 09:18:01.596064  330193 cri.go:89] found id: "390882244d27208c7b2d7d0538a0ff970ed197d0a63b391f3e1c81bd7b8255df"
	I1018 09:18:01.596069  330193 cri.go:89] found id: ""
	I1018 09:18:01.596114  330193 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:18:01.609602  330193 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:01Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:18:01.609687  330193 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:18:01.619278  330193 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:18:01.619297  330193 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:18:01.619376  330193 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:18:01.628525  330193 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:18:01.629710  330193 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-444637" does not appear in /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:18:01.630508  330193 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-5897/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-444637" cluster setting kubeconfig missing "newest-cni-444637" context setting]
	I1018 09:18:01.631708  330193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.633868  330193 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:18:01.643225  330193 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1018 09:18:01.643268  330193 kubeadm.go:601] duration metric: took 23.964839ms to restartPrimaryControlPlane
	I1018 09:18:01.643282  330193 kubeadm.go:402] duration metric: took 78.647978ms to StartCluster
	I1018 09:18:01.643303  330193 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.643398  330193 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:18:01.645409  330193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.645688  330193 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:18:01.645769  330193 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:18:01.645862  330193 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-444637"
	I1018 09:18:01.645882  330193 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-444637"
	W1018 09:18:01.645893  330193 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:18:01.645893  330193 addons.go:69] Setting dashboard=true in profile "newest-cni-444637"
	I1018 09:18:01.645921  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.645934  330193 addons.go:238] Setting addon dashboard=true in "newest-cni-444637"
	W1018 09:18:01.645945  330193 addons.go:247] addon dashboard should already be in state true
	I1018 09:18:01.645945  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:18:01.645948  330193 addons.go:69] Setting default-storageclass=true in profile "newest-cni-444637"
	I1018 09:18:01.645973  330193 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-444637"
	I1018 09:18:01.645980  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.646303  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.646463  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.646481  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.647698  330193 out.go:179] * Verifying Kubernetes components...
	I1018 09:18:01.649210  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:01.673812  330193 addons.go:238] Setting addon default-storageclass=true in "newest-cni-444637"
	W1018 09:18:01.673837  330193 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:18:01.673877  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.674375  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.674516  330193 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:18:01.678901  330193 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:18:01.678924  330193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:18:01.678985  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.679140  330193 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:18:01.680475  330193 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:18:01.681672  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:18:01.681729  330193 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:18:01.681827  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.707736  330193 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:18:01.707766  330193 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:18:01.707826  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.713270  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.719016  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.734187  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.812631  330193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:18:01.828229  330193 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:18:01.828317  330193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:18:01.829858  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:18:01.835854  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:18:01.835874  330193 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:18:01.845491  330193 api_server.go:72] duration metric: took 199.769202ms to wait for apiserver process to appear ...
	I1018 09:18:01.845522  330193 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:18:01.845544  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:01.852363  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:18:01.854253  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:18:01.854275  330193 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:18:01.872324  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:18:01.872363  330193 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:18:01.891549  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:18:01.891576  330193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:18:01.910545  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:18:01.910574  330193 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:18:01.928312  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:18:01.928337  330193 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:18:01.942869  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:18:01.942897  330193 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:18:01.957264  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:18:01.957287  330193 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:18:01.971834  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:18:01.971871  330193 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:18:01.988808  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:18:03.360064  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:18:03.360099  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:18:03.360117  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:03.416525  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1018 09:18:03.416558  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1018 09:18:03.845768  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:03.850882  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:18:03.850913  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:18:03.925688  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.095784279s)
	I1018 09:18:03.925778  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.073377378s)
	I1018 09:18:03.925913  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.937061029s)
	I1018 09:18:03.929127  330193 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-444637 addons enable metrics-server
	
	I1018 09:18:03.937380  330193 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1018 09:18:01.035250  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:03.035670  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	I1018 09:18:03.938934  330193 addons.go:514] duration metric: took 2.293172614s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
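The three kubectl applies above are issued concurrently and all complete within the same two-second window (the ssh_runner.go:235 lines report overlapping durations). A sketch of that fan-out with errgroup (exec'ing a local kubectl is an illustration; minikube actually runs its vendored kubectl over SSH):

// addonsapply.go - concurrent addon applies, in the spirit of the log above.
package main

import (
	"fmt"
	"os/exec"

	"golang.org/x/sync/errgroup"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/dashboard-ns.yaml", // first of the dashboard set
	}
	var g errgroup.Group
	for _, m := range manifests {
		m := m // capture per-iteration (needed before Go 1.22)
		g.Go(func() error {
			out, err := exec.Command("kubectl", "apply", "-f", m).CombinedOutput()
			if err != nil {
				return fmt.Errorf("%s: %v: %s", m, err, out)
			}
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println(err)
	}
}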
	I1018 09:18:04.346493  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:04.351148  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:18:04.351178  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:18:04.845878  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:04.850252  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 09:18:04.851396  330193 api_server.go:141] control plane version: v1.34.1
	I1018 09:18:04.851430  330193 api_server.go:131] duration metric: took 3.005900151s to wait for apiserver health ...
	I1018 09:18:04.851440  330193 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:18:04.855053  330193 system_pods.go:59] 8 kube-system pods found
	I1018 09:18:04.855092  330193 system_pods.go:61] "coredns-66bc5c9577-gc5dd" [7fab8a8d-bdb4-47d4-bf7d-d03341018666] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:18:04.855100  330193 system_pods.go:61] "etcd-newest-cni-444637" [b54d61ad-b52d-4343-ba3a-a64b03934319] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:18:04.855111  330193 system_pods.go:61] "kindnet-qmlcq" [2c82849a-5511-43a1-a300-a7f46df288ec] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 09:18:04.855117  330193 system_pods.go:61] "kube-apiserver-newest-cni-444637" [a9136c1f-8962-45f7-b005-05bd3f856403] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:18:04.855124  330193 system_pods.go:61] "kube-controller-manager-newest-cni-444637" [b8d840d7-04c3-495c-aafa-cc8a06e58f06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:18:04.855130  330193 system_pods.go:61] "kube-proxy-hbkn5" [d70417da-43f2-4d8c-a088-07cea5225c34] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:18:04.855138  330193 system_pods.go:61] "kube-scheduler-newest-cni-444637" [175527c5-4260-4e39-be83-4c36417f3cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:18:04.855142  330193 system_pods.go:61] "storage-provisioner" [b0974a78-b6ad-45c3-8241-86f8bb7bc65b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:18:04.855151  330193 system_pods.go:74] duration metric: took 3.706424ms to wait for pod list to return data ...
	I1018 09:18:04.855162  330193 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:18:04.857785  330193 default_sa.go:45] found service account: "default"
	I1018 09:18:04.857804  330193 default_sa.go:55] duration metric: took 2.636173ms for default service account to be created ...
	I1018 09:18:04.857817  330193 kubeadm.go:586] duration metric: took 3.212102689s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:18:04.857837  330193 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:18:04.860449  330193 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:18:04.860472  330193 node_conditions.go:123] node cpu capacity is 8
	I1018 09:18:04.860486  330193 node_conditions.go:105] duration metric: took 2.642504ms to run NodePressure ...
	I1018 09:18:04.860498  330193 start.go:241] waiting for startup goroutines ...
	I1018 09:18:04.860504  330193 start.go:246] waiting for cluster config update ...
	I1018 09:18:04.860514  330193 start.go:255] writing updated cluster config ...
	I1018 09:18:04.860806  330193 ssh_runner.go:195] Run: rm -f paused
	I1018 09:18:04.910604  330193 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:18:04.913879  330193 out.go:179] * Done! kubectl is now configured to use "newest-cni-444637" cluster and "default" namespace by default
	W1018 09:18:05.535906  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:08.034961  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:10.535644  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:13.036211  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
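The /healthz 500s above are transient: only the two `[-]` post-start hooks (`rbac/bootstrap-roles` and `scheduling/bootstrap-system-priority-classes`) are failing, which is normal while the freshly restarted apiserver finishes bootstrapping, and the next probe at 09:18:04 returns 200. To repeat the same verbose probe by hand against this cluster (a sketch, assuming the kubeconfig context written by this run):

	kubectl --context newest-cni-444637 get --raw '/healthz?verbose'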
	
	
	==> CRI-O <==
	Oct 18 09:17:32 embed-certs-880603 crio[561]: time="2025-10-18T09:17:32.013797252Z" level=info msg="Started container" PID=1730 containerID=bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84/dashboard-metrics-scraper id=72ce1277-014f-453a-af8d-c2b4254d84ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=51dd13b3022bc01fa88441f94247ed30c94bc94bc6e33967bdeea83b88017b61
	Oct 18 09:17:32 embed-certs-880603 crio[561]: time="2025-10-18T09:17:32.959559177Z" level=info msg="Removing container: b244aa331b34689be43fddea7fa93ba93e92293ff85f70c8e94be81113f2eb74" id=28a524bb-d0e1-447a-a21e-bc0fe1982fd6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:17:32 embed-certs-880603 crio[561]: time="2025-10-18T09:17:32.969742835Z" level=info msg="Removed container b244aa331b34689be43fddea7fa93ba93e92293ff85f70c8e94be81113f2eb74: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84/dashboard-metrics-scraper" id=28a524bb-d0e1-447a-a21e-bc0fe1982fd6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.013915148Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ac29d7d9-69bb-4110-8bc0-2315b2cf2359 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.015089268Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cc6edc12-6c8e-4c42-b498-4d9393100031 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.016315895Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8fe833e9-71bd-40dc-a9b7-acb96e7cf3e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.01661665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.021284593Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.021517294Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5faf218f6af00f667adb32e7657e838cb3ff017072ff7b2b1fb0ac1e60678a1f/merged/etc/passwd: no such file or directory"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.021545254Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5faf218f6af00f667adb32e7657e838cb3ff017072ff7b2b1fb0ac1e60678a1f/merged/etc/group: no such file or directory"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.021772826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.048224712Z" level=info msg="Created container 8bad8c86d5f84ce4f24ac728a7dc50d9a1b6a8a07e0f88ebe5640d5ce8dd72ef: kube-system/storage-provisioner/storage-provisioner" id=8fe833e9-71bd-40dc-a9b7-acb96e7cf3e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.049005071Z" level=info msg="Starting container: 8bad8c86d5f84ce4f24ac728a7dc50d9a1b6a8a07e0f88ebe5640d5ce8dd72ef" id=a573b38d-5817-497a-921a-059a178d4f9c name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.050766445Z" level=info msg="Started container" PID=1747 containerID=8bad8c86d5f84ce4f24ac728a7dc50d9a1b6a8a07e0f88ebe5640d5ce8dd72ef description=kube-system/storage-provisioner/storage-provisioner id=a573b38d-5817-497a-921a-059a178d4f9c name=/runtime.v1.RuntimeService/StartContainer sandboxID=9436eeeacd09b92d402b2cbbfda96949569eec44e3c00365c39f17dbaa42e2a6
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.886315023Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f4a6fd99-77f9-41c1-95bb-fc51ddd68978 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.887398136Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8ff266fc-9d9e-4685-9b2f-e1d3a528edd7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.888583882Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84/dashboard-metrics-scraper" id=90da2caf-c48e-4766-b2ad-8a4e9469a84f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.888848206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.894925618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.895701583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.923594343Z" level=info msg="Created container cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84/dashboard-metrics-scraper" id=90da2caf-c48e-4766-b2ad-8a4e9469a84f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.924409047Z" level=info msg="Starting container: cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a" id=5645e2f1-deac-4881-9cdd-e703f4db987d name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:17:52 embed-certs-880603 crio[561]: time="2025-10-18T09:17:52.926450204Z" level=info msg="Started container" PID=1763 containerID=cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84/dashboard-metrics-scraper id=5645e2f1-deac-4881-9cdd-e703f4db987d name=/runtime.v1.RuntimeService/StartContainer sandboxID=51dd13b3022bc01fa88441f94247ed30c94bc94bc6e33967bdeea83b88017b61
	Oct 18 09:17:53 embed-certs-880603 crio[561]: time="2025-10-18T09:17:53.024757641Z" level=info msg="Removing container: bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4" id=6f7d597e-1845-4097-b82d-87a7f83a011c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:17:53 embed-certs-880603 crio[561]: time="2025-10-18T09:17:53.034978598Z" level=info msg="Removed container bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84/dashboard-metrics-scraper" id=6f7d597e-1845-4097-b82d-87a7f83a011c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	cb9cf0dca6916       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   51dd13b3022bc       dashboard-metrics-scraper-6ffb444bf9-nsd84   kubernetes-dashboard
	8bad8c86d5f84       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   9436eeeacd09b       storage-provisioner                          kube-system
	5206a326b6863       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   b8014ad19633d       kubernetes-dashboard-855c9754f9-bdrc4        kubernetes-dashboard
	647069fa7dcc1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   6ba3442f8723e       busybox                                      default
	68c43d93bcc08       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   e1d993b2ab509       coredns-66bc5c9577-7fnw7                     kube-system
	29561b719e517       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   9436eeeacd09b       storage-provisioner                          kube-system
	7642771a96629       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   658333a40c53a       kindnet-wzdm5                                kube-system
	43567ee075073       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   84577859be70a       kube-proxy-k4kcs                             kube-system
	bb56d20e29836       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   b3c30d20e1576       kube-apiserver-embed-certs-880603            kube-system
	299c2f3530014       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   9b865ebb6718a       kube-scheduler-embed-certs-880603            kube-system
	0e0ff398f2a3f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   afa8527d7c9e8       kube-controller-manager-embed-certs-880603   kube-system
	ec50948aa7409       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   be62eb6127f67       etcd-embed-certs-880603                      kube-system
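The container-status table above is minikube's rendering of the CRI state; the same listing can be pulled straight from CRI-O on the node (a sketch, assuming the profile name used in this run):

	minikube -p embed-certs-880603 ssh -- sudo crictl ps -a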
	
	
	==> coredns [68c43d93bcc08f0db42212289b551dc9b0614da25c6fa8caff073aced341e2bd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38653 - 17524 "HINFO IN 5232044663050345143.335907963301023360. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.095079108s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
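The i/o timeouts above are CoreDNS failing to reach 10.96.0.1:443, the in-cluster `kubernetes` Service VIP, most likely during the window before kube-proxy reprogrammed its rules after the restart; the apiserver itself was already answering on the node IP. Two quick checks for this path (a sketch, assuming the same context; `k8s-app=kube-proxy` is the usual kubeadm label):

	kubectl --context embed-certs-880603 get endpoints kubernetes
	kubectl --context embed-certs-880603 -n kube-system get pods -l k8s-app=kube-proxy -o wide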
	
	
	==> describe nodes <==
	Name:               embed-certs-880603
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-880603
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=embed-certs-880603
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_15_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:15:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-880603
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:18:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:17:51 +0000   Sat, 18 Oct 2025 09:15:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:17:51 +0000   Sat, 18 Oct 2025 09:15:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:17:51 +0000   Sat, 18 Oct 2025 09:15:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:17:51 +0000   Sat, 18 Oct 2025 09:16:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-880603
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                8a50a24b-e651-4f1d-8d2e-12e3c28f7fe8
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-7fnw7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m16s
	  kube-system                 etcd-embed-certs-880603                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m22s
	  kube-system                 kindnet-wzdm5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-embed-certs-880603             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-embed-certs-880603    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-k4kcs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-embed-certs-880603             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nsd84    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-bdrc4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m27s (x8 over 2m27s)  kubelet          Node embed-certs-880603 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s (x8 over 2m27s)  kubelet          Node embed-certs-880603 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s (x8 over 2m27s)  kubelet          Node embed-certs-880603 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m22s                  kubelet          Node embed-certs-880603 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m22s                  kubelet          Node embed-certs-880603 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m22s                  kubelet          Node embed-certs-880603 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m17s                  node-controller  Node embed-certs-880603 event: Registered Node embed-certs-880603 in Controller
	  Normal  NodeReady                95s                    kubelet          Node embed-certs-880603 status is now: NodeReady
	  Normal  Starting                 59s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node embed-certs-880603 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node embed-certs-880603 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node embed-certs-880603 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                    node-controller  Node embed-certs-880603 event: Registered Node embed-certs-880603 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[ +43.809014] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 6f 4b 2b 7f 46 08 06
	[  +0.000367] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	
	
	==> etcd [ec50948aa740993206f6c6b998952e98a6c0c34a9993daeba8381c7072181c67] <==
	{"level":"warn","ts":"2025-10-18T09:17:19.526242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.532651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.542159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.548852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.555224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.561750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.569295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.576245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.582771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.589661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.596557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.603554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.610171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.616941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.623166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.629785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.636667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.647233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.655057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.661837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.669386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.690584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.697250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.705260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:19.758784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39450","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:16 up  1:00,  0 user,  load average: 4.32, 3.77, 2.57
	Linux embed-certs-880603 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7642771a96629ecf015c65966266ce95ca17e3edcd86d6a51e666854ab2ddb6f] <==
	I1018 09:17:21.481115       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:17:21.481534       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:17:21.481721       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:17:21.481735       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:17:21.481766       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:17:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:17:21.684916       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:17:21.685034       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:17:21.685052       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:17:21.686186       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:17:22.085411       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:17:22.085441       1 metrics.go:72] Registering metrics
	I1018 09:17:22.085512       1 controller.go:711] "Syncing nftables rules"
	I1018 09:17:31.685555       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:17:31.685623       1 main.go:301] handling current node
	I1018 09:17:41.685918       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:17:41.685973       1 main.go:301] handling current node
	I1018 09:17:51.684949       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:17:51.684990       1 main.go:301] handling current node
	I1018 09:18:01.686431       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:18:01.686500       1 main.go:301] handling current node
	I1018 09:18:11.691296       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:18:11.691327       1 main.go:301] handling current node
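The one error here, "nri plugin exited: failed to connect to NRI service", is expected when CRI-O runs without NRI enabled: kindnet skips its NRI integration and, as the subsequent lines show, keeps handling the node. To confirm the socket is absent on the node (a sketch, assuming the same profile; an error from ls confirms the path does not exist):

	minikube -p embed-certs-880603 ssh -- ls -l /var/run/nri/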
	
	
	==> kube-apiserver [bb56d20e298366d034e8ab343121b80816150a01f06f5e3dcf23656917831fa8] <==
	I1018 09:17:20.269046       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:17:20.269114       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:17:20.269130       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:17:20.269136       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:17:20.269142       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:17:20.269266       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 09:17:20.269318       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:17:20.275808       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:17:20.308815       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:17:20.318253       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:17:20.318306       1 policy_source.go:240] refreshing policies
	I1018 09:17:20.397633       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:17:20.568183       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:17:20.607182       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:17:20.633579       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:17:20.642582       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:17:20.650416       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:17:20.693280       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.14.148"}
	I1018 09:17:20.704896       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.199.26"}
	I1018 09:17:21.174107       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:17:23.906441       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:17:24.006754       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:17:24.006754       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:17:24.206585       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:17:24.206584       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0e0ff398f2a3fe03603d026ff3c4c4aa2a99cc70201bf2049eaef07838ab4ad9] <==
	I1018 09:17:23.574490       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:17:23.576824       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:17:23.579129       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:17:23.581393       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:17:23.587639       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:17:23.587665       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:17:23.587675       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:17:23.602562       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:17:23.602587       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:17:23.602614       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:17:23.602680       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:17:23.602811       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:17:23.603197       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:17:23.603310       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:17:23.603723       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:17:23.605208       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 09:17:23.607475       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:17:23.607623       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:17:23.609759       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:17:23.609800       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:17:23.609854       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:17:23.609897       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:17:23.609914       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:17:23.609922       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:17:23.628490       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [43567ee0750735c42ca3e8a987a5f7de05f91d9cc6c196a312f126a0fb9db347] <==
	I1018 09:17:21.302532       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:17:21.359374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:17:21.460555       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:17:21.460600       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:17:21.460696       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:17:21.486232       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:17:21.486297       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:17:21.492960       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:17:21.493495       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:17:21.493583       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:17:21.495100       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:17:21.495127       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:17:21.495170       1 config.go:200] "Starting service config controller"
	I1018 09:17:21.495176       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:17:21.495199       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:17:21.495229       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:17:21.495280       1 config.go:309] "Starting node config controller"
	I1018 09:17:21.495727       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:17:21.595785       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:17:21.595812       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:17:21.595823       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:17:21.595851       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [299c2f3530014f0f00b7697904d0bbe7e76037825ca7356cddf1e7e8e4cfbd3f] <==
	I1018 09:17:19.415542       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:17:20.180956       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:17:20.181058       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:17:20.181107       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:17:20.181136       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:17:20.228835       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:17:20.228871       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:17:20.235233       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:17:20.235277       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:17:20.236291       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:17:20.236376       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:17:20.335470       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
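The three startup warnings are the scheduler probing the `extension-apiserver-authentication` ConfigMap before RBAC bootstrapping has finished; as the log says, it continues without the authentication configuration, which is harmless for these tests. If the warnings persisted, the message's own suggestion could be instantiated like this (a sketch; the rolebinding name is made up, and the scheduler authenticates as the user `system:kube-scheduler` rather than a service account):

	kubectl --context embed-certs-880603 -n kube-system create rolebinding scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler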
	
	
	==> kubelet <==
	Oct 18 09:17:24 embed-certs-880603 kubelet[718]: I1018 09:17:24.218021     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-889gf\" (UniqueName: \"kubernetes.io/projected/3a382a27-a21c-4ec2-8631-c1534993e7c4-kube-api-access-889gf\") pod \"dashboard-metrics-scraper-6ffb444bf9-nsd84\" (UID: \"3a382a27-a21c-4ec2-8631-c1534993e7c4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84"
	Oct 18 09:17:24 embed-certs-880603 kubelet[718]: I1018 09:17:24.218045     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/edf6770a-0607-485d-8eef-aab09553ed76-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-bdrc4\" (UID: \"edf6770a-0607-485d-8eef-aab09553ed76\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bdrc4"
	Oct 18 09:17:28 embed-certs-880603 kubelet[718]: I1018 09:17:28.256833     718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:17:29 embed-certs-880603 kubelet[718]: I1018 09:17:29.971533     718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bdrc4" podStartSLOduration=1.5264020459999998 podStartE2EDuration="5.971506864s" podCreationTimestamp="2025-10-18 09:17:24 +0000 UTC" firstStartedPulling="2025-10-18 09:17:24.452059025 +0000 UTC m=+6.662591918" lastFinishedPulling="2025-10-18 09:17:28.897163822 +0000 UTC m=+11.107696736" observedRunningTime="2025-10-18 09:17:29.969359381 +0000 UTC m=+12.179892298" watchObservedRunningTime="2025-10-18 09:17:29.971506864 +0000 UTC m=+12.182039778"
	Oct 18 09:17:31 embed-certs-880603 kubelet[718]: I1018 09:17:31.952768     718 scope.go:117] "RemoveContainer" containerID="b244aa331b34689be43fddea7fa93ba93e92293ff85f70c8e94be81113f2eb74"
	Oct 18 09:17:32 embed-certs-880603 kubelet[718]: I1018 09:17:32.957918     718 scope.go:117] "RemoveContainer" containerID="b244aa331b34689be43fddea7fa93ba93e92293ff85f70c8e94be81113f2eb74"
	Oct 18 09:17:32 embed-certs-880603 kubelet[718]: I1018 09:17:32.958071     718 scope.go:117] "RemoveContainer" containerID="bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4"
	Oct 18 09:17:32 embed-certs-880603 kubelet[718]: E1018 09:17:32.958278     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nsd84_kubernetes-dashboard(3a382a27-a21c-4ec2-8631-c1534993e7c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84" podUID="3a382a27-a21c-4ec2-8631-c1534993e7c4"
	Oct 18 09:17:33 embed-certs-880603 kubelet[718]: I1018 09:17:33.963882     718 scope.go:117] "RemoveContainer" containerID="bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4"
	Oct 18 09:17:33 embed-certs-880603 kubelet[718]: E1018 09:17:33.964045     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nsd84_kubernetes-dashboard(3a382a27-a21c-4ec2-8631-c1534993e7c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84" podUID="3a382a27-a21c-4ec2-8631-c1534993e7c4"
	Oct 18 09:17:39 embed-certs-880603 kubelet[718]: I1018 09:17:39.320781     718 scope.go:117] "RemoveContainer" containerID="bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4"
	Oct 18 09:17:39 embed-certs-880603 kubelet[718]: E1018 09:17:39.321128     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nsd84_kubernetes-dashboard(3a382a27-a21c-4ec2-8631-c1534993e7c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84" podUID="3a382a27-a21c-4ec2-8631-c1534993e7c4"
	Oct 18 09:17:52 embed-certs-880603 kubelet[718]: I1018 09:17:52.013442     718 scope.go:117] "RemoveContainer" containerID="29561b719e51746ae7b9206a1fb65330b21daa2b29035245490abc4a2b5912d8"
	Oct 18 09:17:52 embed-certs-880603 kubelet[718]: I1018 09:17:52.885739     718 scope.go:117] "RemoveContainer" containerID="bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4"
	Oct 18 09:17:53 embed-certs-880603 kubelet[718]: I1018 09:17:53.023462     718 scope.go:117] "RemoveContainer" containerID="bae9f43ffcab3eee2b365e02c47153ce88dc92dfd2d460a2d5ec7f7b7fafbac4"
	Oct 18 09:17:53 embed-certs-880603 kubelet[718]: I1018 09:17:53.023670     718 scope.go:117] "RemoveContainer" containerID="cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a"
	Oct 18 09:17:53 embed-certs-880603 kubelet[718]: E1018 09:17:53.023851     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nsd84_kubernetes-dashboard(3a382a27-a21c-4ec2-8631-c1534993e7c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84" podUID="3a382a27-a21c-4ec2-8631-c1534993e7c4"
	Oct 18 09:17:59 embed-certs-880603 kubelet[718]: I1018 09:17:59.319239     718 scope.go:117] "RemoveContainer" containerID="cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a"
	Oct 18 09:17:59 embed-certs-880603 kubelet[718]: E1018 09:17:59.319511     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nsd84_kubernetes-dashboard(3a382a27-a21c-4ec2-8631-c1534993e7c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84" podUID="3a382a27-a21c-4ec2-8631-c1534993e7c4"
	Oct 18 09:18:11 embed-certs-880603 kubelet[718]: I1018 09:18:11.886128     718 scope.go:117] "RemoveContainer" containerID="cb9cf0dca6916663ae3bd9589727562f6bce4ea8a5f77c65df3f1a6abcc72b7a"
	Oct 18 09:18:11 embed-certs-880603 kubelet[718]: E1018 09:18:11.886363     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nsd84_kubernetes-dashboard(3a382a27-a21c-4ec2-8631-c1534993e7c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nsd84" podUID="3a382a27-a21c-4ec2-8631-c1534993e7c4"
	Oct 18 09:18:11 embed-certs-880603 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:18:11 embed-certs-880603 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:18:11 embed-certs-880603 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:18:11 embed-certs-880603 systemd[1]: kubelet.service: Consumed 1.847s CPU time.
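dashboard-metrics-scraper is in CrashLoopBackOff with the back-off climbing from 10s to 20s; the kubelet log only records the restarts, not why the container exits. The exit reason lives in the previous attempt's logs (a sketch, using the pod name from this run):

	kubectl --context embed-certs-880603 -n kubernetes-dashboard logs \
	  dashboard-metrics-scraper-6ffb444bf9-nsd84 --previous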
	
	
	==> kubernetes-dashboard [5206a326b6863dc499d944fee0a747773134d23171f6cea0eed24802ac4170f1] <==
	2025/10/18 09:17:28 Starting overwatch
	2025/10/18 09:17:28 Using namespace: kubernetes-dashboard
	2025/10/18 09:17:28 Using in-cluster config to connect to apiserver
	2025/10/18 09:17:28 Using secret token for csrf signing
	2025/10/18 09:17:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:17:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:17:28 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:17:28 Generating JWE encryption key
	2025/10/18 09:17:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:17:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:17:29 Initializing JWE encryption key from synchronized object
	2025/10/18 09:17:29 Creating in-cluster Sidecar client
	2025/10/18 09:17:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:17:29 Serving insecurely on HTTP port: 9090
	2025/10/18 09:17:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
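Both health-check failures line up with the crash-looping scraper seen in the kubelet log above: the dashboard's Sidecar client targets the `dashboard-metrics-scraper` Service, which has no ready endpoints while its pod is in back-off. A quick way to confirm (a sketch, assuming the same context):

	kubectl --context embed-certs-880603 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper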
	
	
	==> storage-provisioner [29561b719e51746ae7b9206a1fb65330b21daa2b29035245490abc4a2b5912d8] <==
	I1018 09:17:21.264125       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:17:51.267756       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
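This is the same 10.96.0.1:443 timeout seen in the CoreDNS log above; the provisioner treats it as fatal (the F line) and exits, which is why the container-status table shows attempt 0 Exited and a replacement running as attempt 1. Restart counts can be checked directly (a sketch):

	kubectl --context embed-certs-880603 -n kube-system get pod storage-provisioner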
	
	
	==> storage-provisioner [8bad8c86d5f84ce4f24ac728a7dc50d9a1b6a8a07e0f88ebe5640d5ce8dd72ef] <==
	I1018 09:17:52.064277       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:17:52.071428       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:17:52.071488       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:17:52.073962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:55.529535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:59.793622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:03.394008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:06.448551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:09.470842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:09.475681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:18:09.475865       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:18:09.475999       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5e8e0df5-6f93-48eb-99a3-eaa105313a85", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-880603_eaa60fd9-6c0a-4297-b651-835586687f82 became leader
	I1018 09:18:09.476020       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-880603_eaa60fd9-6c0a-4297-b651-835586687f82!
	W1018 09:18:09.478921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:09.482576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:18:09.577358       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-880603_eaa60fd9-6c0a-4297-b651-835586687f82!
	W1018 09:18:11.486313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:11.491337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:13.494710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:13.502810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:15.506268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:15.510393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
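
The repeated warnings.go lines are client-side deprecation notices: the provisioner's leader election still locks on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), so every poll of that lock logs the Endpoints deprecation warning. Current client-go favors a coordination.k8s.io Lease lock; a hedged sketch of the Lease-based equivalent (identity and timings illustrative, not the provisioner's code):

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// Lease lock instead of the deprecated v1 Endpoints lock.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() { log.Println("lost lease") },
			},
		})
	}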
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880603 -n embed-certs-880603
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880603 -n embed-certs-880603: exit status 2 (328.944862ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-880603 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-986220 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-986220 --alsologtostderr -v=1: exit status 80 (1.794730382s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-986220 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:18:32.168596  336689 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:18:32.169056  336689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:18:32.169066  336689 out.go:374] Setting ErrFile to fd 2...
	I1018 09:18:32.169070  336689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:18:32.169276  336689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:18:32.169560  336689 out.go:368] Setting JSON to false
	I1018 09:18:32.169597  336689 mustload.go:65] Loading cluster: default-k8s-diff-port-986220
	I1018 09:18:32.169940  336689 config.go:182] Loaded profile config "default-k8s-diff-port-986220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:18:32.170316  336689 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-986220 --format={{.State.Status}}
	I1018 09:18:32.189290  336689 host.go:66] Checking if "default-k8s-diff-port-986220" exists ...
	I1018 09:18:32.189596  336689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:18:32.250905  336689 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-18 09:18:32.240653798 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:18:32.251585  336689 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-986220 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:18:32.254587  336689 out.go:179] * Pausing node default-k8s-diff-port-986220 ... 
	I1018 09:18:32.256113  336689 host.go:66] Checking if "default-k8s-diff-port-986220" exists ...
	I1018 09:18:32.256459  336689 ssh_runner.go:195] Run: systemctl --version
	I1018 09:18:32.256506  336689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-986220
	I1018 09:18:32.275647  336689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/default-k8s-diff-port-986220/id_rsa Username:docker}
	I1018 09:18:32.371517  336689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:18:32.398228  336689 pause.go:52] kubelet running: true
	I1018 09:18:32.398301  336689 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:18:32.562319  336689 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:18:32.562463  336689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:18:32.633248  336689 cri.go:89] found id: "49ce691fa7cdd9b79bec964c1afdc4a8c154310a7c6ea44e93cd0d76e7d23447"
	I1018 09:18:32.633274  336689 cri.go:89] found id: "514c449ade6a78cd215a5ddfcf373f35a48b107fc90ec5014b5ea1fcf64cfc79"
	I1018 09:18:32.633280  336689 cri.go:89] found id: "4d5cc19ffee186783c97a10c5f2a7ef492399eb6f8acbfa30d889652dbfdcd2b"
	I1018 09:18:32.633285  336689 cri.go:89] found id: "3e2529aa2dd60af7f9c954b73b314b5ec999e7f5a4e0b8dd5a9e4f8b4143a321"
	I1018 09:18:32.633290  336689 cri.go:89] found id: "efcf153f5528d91cf81fb7b54240b482e4822aa80a11aa28014d0e8723503d50"
	I1018 09:18:32.633295  336689 cri.go:89] found id: "8956123c1313708cc585f6ee981938531d1fde0ef837a5cdbf5b02ab1fb0c549"
	I1018 09:18:32.633300  336689 cri.go:89] found id: "8d1ab9fe3eb84ef483a99bbfe79d01dfa34dfdff518ca313e3c2299c6723b35e"
	I1018 09:18:32.633303  336689 cri.go:89] found id: "1dc67601595acad3b95b404bf690768d89426dc4a4256db06ee931235af514af"
	I1018 09:18:32.633305  336689 cri.go:89] found id: "bad27ff83c63687be534ccd3f079002f13a4d8cf081095fd1e212a53f3010fbf"
	I1018 09:18:32.633310  336689 cri.go:89] found id: "327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db"
	I1018 09:18:32.633312  336689 cri.go:89] found id: "dd2f1f47e902c1cbe5cb90ca529db2c31f57a6d6f5fdebcf2ed75577b59a049b"
	I1018 09:18:32.633315  336689 cri.go:89] found id: ""
	I1018 09:18:32.633375  336689 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:18:32.645452  336689 retry.go:31] will retry after 340.337924ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:32Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:18:32.986023  336689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:18:32.999701  336689 pause.go:52] kubelet running: false
	I1018 09:18:32.999771  336689 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:18:33.141974  336689 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:18:33.142068  336689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:18:33.210071  336689 cri.go:89] found id: "49ce691fa7cdd9b79bec964c1afdc4a8c154310a7c6ea44e93cd0d76e7d23447"
	I1018 09:18:33.210093  336689 cri.go:89] found id: "514c449ade6a78cd215a5ddfcf373f35a48b107fc90ec5014b5ea1fcf64cfc79"
	I1018 09:18:33.210097  336689 cri.go:89] found id: "4d5cc19ffee186783c97a10c5f2a7ef492399eb6f8acbfa30d889652dbfdcd2b"
	I1018 09:18:33.210100  336689 cri.go:89] found id: "3e2529aa2dd60af7f9c954b73b314b5ec999e7f5a4e0b8dd5a9e4f8b4143a321"
	I1018 09:18:33.210103  336689 cri.go:89] found id: "efcf153f5528d91cf81fb7b54240b482e4822aa80a11aa28014d0e8723503d50"
	I1018 09:18:33.210106  336689 cri.go:89] found id: "8956123c1313708cc585f6ee981938531d1fde0ef837a5cdbf5b02ab1fb0c549"
	I1018 09:18:33.210109  336689 cri.go:89] found id: "8d1ab9fe3eb84ef483a99bbfe79d01dfa34dfdff518ca313e3c2299c6723b35e"
	I1018 09:18:33.210111  336689 cri.go:89] found id: "1dc67601595acad3b95b404bf690768d89426dc4a4256db06ee931235af514af"
	I1018 09:18:33.210114  336689 cri.go:89] found id: "bad27ff83c63687be534ccd3f079002f13a4d8cf081095fd1e212a53f3010fbf"
	I1018 09:18:33.210129  336689 cri.go:89] found id: "327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db"
	I1018 09:18:33.210134  336689 cri.go:89] found id: "dd2f1f47e902c1cbe5cb90ca529db2c31f57a6d6f5fdebcf2ed75577b59a049b"
	I1018 09:18:33.210138  336689 cri.go:89] found id: ""
	I1018 09:18:33.210182  336689 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:18:33.222237  336689 retry.go:31] will retry after 440.622848ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:33Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:18:33.663945  336689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:18:33.677605  336689 pause.go:52] kubelet running: false
	I1018 09:18:33.677721  336689 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:18:33.824735  336689 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:18:33.824818  336689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:18:33.891488  336689 cri.go:89] found id: "49ce691fa7cdd9b79bec964c1afdc4a8c154310a7c6ea44e93cd0d76e7d23447"
	I1018 09:18:33.891517  336689 cri.go:89] found id: "514c449ade6a78cd215a5ddfcf373f35a48b107fc90ec5014b5ea1fcf64cfc79"
	I1018 09:18:33.891534  336689 cri.go:89] found id: "4d5cc19ffee186783c97a10c5f2a7ef492399eb6f8acbfa30d889652dbfdcd2b"
	I1018 09:18:33.891541  336689 cri.go:89] found id: "3e2529aa2dd60af7f9c954b73b314b5ec999e7f5a4e0b8dd5a9e4f8b4143a321"
	I1018 09:18:33.891546  336689 cri.go:89] found id: "efcf153f5528d91cf81fb7b54240b482e4822aa80a11aa28014d0e8723503d50"
	I1018 09:18:33.891551  336689 cri.go:89] found id: "8956123c1313708cc585f6ee981938531d1fde0ef837a5cdbf5b02ab1fb0c549"
	I1018 09:18:33.891555  336689 cri.go:89] found id: "8d1ab9fe3eb84ef483a99bbfe79d01dfa34dfdff518ca313e3c2299c6723b35e"
	I1018 09:18:33.891574  336689 cri.go:89] found id: "1dc67601595acad3b95b404bf690768d89426dc4a4256db06ee931235af514af"
	I1018 09:18:33.891582  336689 cri.go:89] found id: "bad27ff83c63687be534ccd3f079002f13a4d8cf081095fd1e212a53f3010fbf"
	I1018 09:18:33.891590  336689 cri.go:89] found id: "327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db"
	I1018 09:18:33.891597  336689 cri.go:89] found id: "dd2f1f47e902c1cbe5cb90ca529db2c31f57a6d6f5fdebcf2ed75577b59a049b"
	I1018 09:18:33.891602  336689 cri.go:89] found id: ""
	I1018 09:18:33.891648  336689 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:18:33.906146  336689 out.go:203] 
	W1018 09:18:33.907728  336689 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:18:33.907748  336689 out.go:285] * 
	* 
	W1018 09:18:33.911716  336689 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:18:33.913163  336689 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-986220 --alsologtostderr -v=1 failed: exit status 80
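The root cause matches the embed-certs pause failure above: kubelet stops cleanly and crictl still enumerates the expected containers, but every `sudo runc list -f json` attempt fails because runc's default state directory /run/runc does not exist on the node, suggesting these CRI-O containers were not created under runc's default root. Pause therefore exhausts its retries and exits with GUEST_PAUSE. A minimal reproduction of the failing probe, with a retry loop shaped like the log's (delays approximated; this is not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// probeRunc mirrors the failing step in the log: `sudo runc list -f json`,
	// which reads runc's default state dir /run/runc.
	func probeRunc() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc list: %v: %s", err, out)
		}
		fmt.Printf("%s", out)
		return nil
	}

	func main() {
		// The log shows two jittered retries (~340ms, ~440ms) before giving up;
		// this fixed schedule only approximates that.
		delays := []time.Duration{0, 340 * time.Millisecond, 440 * time.Millisecond}
		var err error
		for _, d := range delays {
			time.Sleep(d)
			if err = probeRunc(); err == nil {
				return
			}
			fmt.Println("retrying:", err)
		}
		fmt.Println("giving up:", err)
	}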
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-986220
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-986220:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575",
	        "Created": "2025-10-18T09:16:19.86673265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 324454,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:17:29.591929444Z",
	            "FinishedAt": "2025-10-18T09:17:27.758633883Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575/hostname",
	        "HostsPath": "/var/lib/docker/containers/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575/hosts",
	        "LogPath": "/var/lib/docker/containers/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575-json.log",
	        "Name": "/default-k8s-diff-port-986220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-986220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-986220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575",
	                "LowerDir": "/var/lib/docker/overlay2/ebca512855d8718cc57a74f0c5a7cb78a8d4717430e6e9b0fbcfa814a3464016-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ebca512855d8718cc57a74f0c5a7cb78a8d4717430e6e9b0fbcfa814a3464016/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ebca512855d8718cc57a74f0c5a7cb78a8d4717430e6e9b0fbcfa814a3464016/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ebca512855d8718cc57a74f0c5a7cb78a8d4717430e6e9b0fbcfa814a3464016/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-986220",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-986220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-986220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-986220",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-986220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "576063fdfdc15bd6e7ea20ccdd827695dae890ec9309272560e5870ffda77da3",
	            "SandboxKey": "/var/run/docker/netns/576063fdfdc1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-986220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:a8:ba:84:d6:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef55982bb9e9da39f5725d618404d1c9094984213effce96590128a5ebc25231",
	                    "EndpointID": "c5014c4620214713400ed8439671c20f436fc0c72a973d742500dd4cd1e3ef7b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-986220",
	                        "48881c0b9d83"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
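Note "Status": "running" and "Paused": false in the inspect output: the pause attempt failed inside the guest before anything was frozen at the Docker level, so the node container itself is untouched. Those two fields can be read directly with a standard docker inspect format template; wrapped in Go here only for consistency with the other sketches:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Prints e.g. "running paused=false" for the node container.
		out, err := exec.Command("docker", "inspect",
			"-f", "{{.State.Status}} paused={{.State.Paused}}",
			"default-k8s-diff-port-986220").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out)
	}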
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-986220 -n default-k8s-diff-port-986220
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-986220 -n default-k8s-diff-port-986220: exit status 2 (313.230014ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-986220 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-986220 logs -n 25: (1.101856553s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-986220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-880603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-986220 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ image   │ no-preload-031066 image list --format=json                                                                                                                                                                                                    │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ pause   │ -p no-preload-031066 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-986220 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:18 UTC │
	│ delete  │ -p no-preload-031066                                                                                                                                                                                                                          │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p no-preload-031066                                                                                                                                                                                                                          │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-444637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ stop    │ -p newest-cni-444637 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-444637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:18 UTC │
	│ image   │ newest-cni-444637 image list --format=json                                                                                                                                                                                                    │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ pause   │ -p newest-cni-444637 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ image   │ embed-certs-880603 image list --format=json                                                                                                                                                                                                   │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ pause   │ -p embed-certs-880603 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ delete  │ -p newest-cni-444637                                                                                                                                                                                                                          │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ delete  │ -p newest-cni-444637                                                                                                                                                                                                                          │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ delete  │ -p embed-certs-880603                                                                                                                                                                                                                         │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ delete  │ -p embed-certs-880603                                                                                                                                                                                                                         │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ image   │ default-k8s-diff-port-986220 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ pause   │ -p default-k8s-diff-port-986220 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:17:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:17:54.427005  330193 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:17:54.427270  330193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:54.427281  330193 out.go:374] Setting ErrFile to fd 2...
	I1018 09:17:54.427287  330193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:54.427525  330193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:17:54.428050  330193 out.go:368] Setting JSON to false
	I1018 09:17:54.429280  330193 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3622,"bootTime":1760775452,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:17:54.429387  330193 start.go:141] virtualization: kvm guest
	I1018 09:17:54.431635  330193 out.go:179] * [newest-cni-444637] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:17:54.432952  330193 notify.go:220] Checking for updates...
	I1018 09:17:54.432979  330193 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:17:54.434488  330193 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:17:54.435897  330193 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:54.437111  330193 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:17:54.438264  330193 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:17:54.439545  330193 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:17:54.441204  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:54.441727  330193 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:17:54.467746  330193 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:17:54.467827  330193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:54.527403  330193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 09:17:54.515566485 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:54.527559  330193 docker.go:318] overlay module found
	I1018 09:17:54.529436  330193 out.go:179] * Using the docker driver based on existing profile
	I1018 09:17:54.530557  330193 start.go:305] selected driver: docker
	I1018 09:17:54.530578  330193 start.go:925] validating driver "docker" against &{Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:54.530680  330193 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:17:54.531357  330193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:54.591156  330193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 09:17:54.580755477 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:54.591532  330193 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:17:54.591566  330193 cni.go:84] Creating CNI manager for ""
	I1018 09:17:54.591617  330193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:17:54.591683  330193 start.go:349] cluster config:
	{Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:54.593449  330193 out.go:179] * Starting "newest-cni-444637" primary control-plane node in "newest-cni-444637" cluster
	I1018 09:17:54.594724  330193 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:17:54.596122  330193 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:17:54.597292  330193 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:17:54.597335  330193 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:17:54.597376  330193 cache.go:58] Caching tarball of preloaded images
	I1018 09:17:54.597366  330193 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:17:54.597499  330193 preload.go:233] Found /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:17:54.597519  330193 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:17:54.597628  330193 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json ...
	I1018 09:17:54.619906  330193 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:17:54.619924  330193 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:17:54.619939  330193 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:17:54.619961  330193 start.go:360] acquireMachinesLock for newest-cni-444637: {Name:mkf6974ca6fc7b22cdf212b383f50d3f090ea59b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:17:54.620020  330193 start.go:364] duration metric: took 43.166µs to acquireMachinesLock for "newest-cni-444637"
	I1018 09:17:54.620037  330193 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:17:54.620042  330193 fix.go:54] fixHost starting: 
	I1018 09:17:54.620234  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:54.638627  330193 fix.go:112] recreateIfNeeded on newest-cni-444637: state=Stopped err=<nil>
	W1018 09:17:54.638652  330193 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:17:51.833553  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:53.833757  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:56.034833  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:17:58.534991  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	I1018 09:17:54.640543  330193 out.go:252] * Restarting existing docker container for "newest-cni-444637" ...
	I1018 09:17:54.640644  330193 cli_runner.go:164] Run: docker start newest-cni-444637
	I1018 09:17:54.903916  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:54.923445  330193 kic.go:430] container "newest-cni-444637" state is running.
	I1018 09:17:54.923919  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:54.944878  330193 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json ...
	I1018 09:17:54.945143  330193 machine.go:93] provisionDockerMachine start ...
	I1018 09:17:54.945221  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:54.965135  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:54.965422  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:54.965438  330193 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:17:54.966008  330193 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59674->127.0.0.1:33133: read: connection reset by peer
	I1018 09:17:58.102821  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-444637
	
	I1018 09:17:58.102846  330193 ubuntu.go:182] provisioning hostname "newest-cni-444637"
	I1018 09:17:58.102902  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.121992  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.122251  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.122274  330193 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-444637 && echo "newest-cni-444637" | sudo tee /etc/hostname
	I1018 09:17:58.271611  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-444637
	
	I1018 09:17:58.271696  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.295116  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.295331  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.295366  330193 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-444637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-444637/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-444637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:17:58.435338  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:17:58.435406  330193 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:17:58.435457  330193 ubuntu.go:190] setting up certificates
	I1018 09:17:58.435470  330193 provision.go:84] configureAuth start
	I1018 09:17:58.435550  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:58.454683  330193 provision.go:143] copyHostCerts
	I1018 09:17:58.454758  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:17:58.454789  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:17:58.454878  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:17:58.455021  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:17:58.455032  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:17:58.455077  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:17:58.455176  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:17:58.455185  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:17:58.455229  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:17:58.455323  330193 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-444637 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-444637]
	I1018 09:17:58.651717  330193 provision.go:177] copyRemoteCerts
	I1018 09:17:58.651791  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:17:58.651850  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.670990  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:58.769295  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:17:58.788403  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:17:58.807495  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:17:58.826308  330193 provision.go:87] duration metric: took 390.822036ms to configureAuth
	I1018 09:17:58.826335  330193 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:17:58.826534  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:58.826624  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.845940  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.846169  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.846191  330193 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:17:59.117215  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:17:59.117238  330193 machine.go:96] duration metric: took 4.172078969s to provisionDockerMachine
	I1018 09:17:59.117253  330193 start.go:293] postStartSetup for "newest-cni-444637" (driver="docker")
	I1018 09:17:59.117266  330193 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:17:59.117338  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:17:59.117401  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.136996  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.235549  330193 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:17:59.239452  330193 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:17:59.239483  330193 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:17:59.239505  330193 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:17:59.239563  330193 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:17:59.239658  330193 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:17:59.239788  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:17:59.248379  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:17:59.268012  330193 start.go:296] duration metric: took 150.737252ms for postStartSetup
	I1018 09:17:59.268099  330193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:17:59.268146  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.287401  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.382795  330193 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:17:59.388305  330193 fix.go:56] duration metric: took 4.768253133s for fixHost
	I1018 09:17:59.388338  330193 start.go:83] releasing machines lock for "newest-cni-444637", held for 4.76830641s
	I1018 09:17:59.388481  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:59.407756  330193 ssh_runner.go:195] Run: cat /version.json
	I1018 09:17:59.407798  330193 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:17:59.407876  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.407803  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	W1018 09:17:56.333478  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	I1018 09:17:58.333556  318609 pod_ready.go:94] pod "coredns-66bc5c9577-7fnw7" is "Ready"
	I1018 09:17:58.333585  318609 pod_ready.go:86] duration metric: took 36.506179321s for pod "coredns-66bc5c9577-7fnw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.336410  318609 pod_ready.go:83] waiting for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.341932  318609 pod_ready.go:94] pod "etcd-embed-certs-880603" is "Ready"
	I1018 09:17:58.341964  318609 pod_ready.go:86] duration metric: took 5.525225ms for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.344669  318609 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.349852  318609 pod_ready.go:94] pod "kube-apiserver-embed-certs-880603" is "Ready"
	I1018 09:17:58.349882  318609 pod_ready.go:86] duration metric: took 5.170321ms for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.352067  318609 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.532002  318609 pod_ready.go:94] pod "kube-controller-manager-embed-certs-880603" is "Ready"
	I1018 09:17:58.532034  318609 pod_ready.go:86] duration metric: took 179.946406ms for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.732243  318609 pod_ready.go:83] waiting for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.131632  318609 pod_ready.go:94] pod "kube-proxy-k4kcs" is "Ready"
	I1018 09:17:59.131665  318609 pod_ready.go:86] duration metric: took 399.394452ms for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.332088  318609 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.734805  318609 pod_ready.go:94] pod "kube-scheduler-embed-certs-880603" is "Ready"
	I1018 09:17:59.734842  318609 pod_ready.go:86] duration metric: took 402.724813ms for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.734856  318609 pod_ready.go:40] duration metric: took 37.912005765s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:17:59.783224  318609 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:17:59.785136  318609 out.go:179] * Done! kubectl is now configured to use "embed-certs-880603" cluster and "default" namespace by default
	I1018 09:17:59.428145  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.430455  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.580030  330193 ssh_runner.go:195] Run: systemctl --version
	I1018 09:17:59.587085  330193 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:17:59.625510  330193 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:17:59.630784  330193 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:17:59.630846  330193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:17:59.639622  330193 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
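	The find invocation logged two lines up loses its shell quoting in the log output; restored, it reads as below (a readable rendering of the same command, not an extra step): any bridge or podman CNI config directly under /etc/cni/net.d is renamed to *.mk_disabled so that only the recommended kindnet CNI stays active.
		sudo find /etc/cni/net.d -maxdepth 1 -type f \
		  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
		  -printf '%p, ' \
		  -exec sh -c 'sudo mv {} {}.mk_disabled' \;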
	I1018 09:17:59.639650  330193 start.go:495] detecting cgroup driver to use...
	I1018 09:17:59.639695  330193 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:17:59.639752  330193 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:17:59.654825  330193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:17:59.668280  330193 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:17:59.668366  330193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:17:59.683973  330193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:17:59.698385  330193 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:17:59.790586  330193 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:17:59.892076  330193 docker.go:234] disabling docker service ...
	I1018 09:17:59.892147  330193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:17:59.908881  330193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:17:59.922861  330193 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:18:00.012767  330193 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:18:00.112051  330193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:18:00.125686  330193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:18:00.142184  330193 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:18:00.142248  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.153446  330193 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:18:00.153510  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.163772  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.173529  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.183180  330193 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:18:00.192357  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.202160  330193 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.211313  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
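	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with entries along these lines (a reconstruction from the logged commands; the file itself is not captured in the log, and the CRI-O section headers shown here are assumed):
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10.1"

		[crio.runtime]
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]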
	I1018 09:18:00.221003  330193 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:18:00.229269  330193 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:18:00.238137  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:00.320620  330193 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:18:00.435033  330193 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:18:00.435106  330193 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:18:00.439539  330193 start.go:563] Will wait 60s for crictl version
	I1018 09:18:00.439606  330193 ssh_runner.go:195] Run: which crictl
	I1018 09:18:00.443682  330193 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:18:00.469987  330193 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:18:00.470070  330193 ssh_runner.go:195] Run: crio --version
	I1018 09:18:00.500186  330193 ssh_runner.go:195] Run: crio --version
	I1018 09:18:00.531772  330193 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:18:00.533155  330193 cli_runner.go:164] Run: docker network inspect newest-cni-444637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:18:00.552284  330193 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:18:00.556833  330193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
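	The one-liner above rewrites /etc/hosts idempotently: strip any stale mapping, append the current one, then swap the whole file in a single cp. Unpacked (the same command; $'\t' stands for the literal tab and $$ expands to the shell's PID, giving a unique temp file):
		{
		  grep -v $'\thost.minikube.internal$' /etc/hosts    # drop any existing entry
		  echo $'192.168.103.1\thost.minikube.internal'      # append the fresh mapping
		} > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts                         # replace the file in one step
	The same pattern is reused further down for control-plane.minikube.internal.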
	I1018 09:18:00.569469  330193 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:18:00.570643  330193 kubeadm.go:883] updating cluster {Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:18:00.570761  330193 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:18:00.570826  330193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:18:00.604611  330193 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:18:00.604633  330193 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:18:00.604679  330193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:18:00.632395  330193 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:18:00.632438  330193 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:18:00.632446  330193 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:18:00.632555  330193 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-444637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:18:00.632630  330193 ssh_runner.go:195] Run: crio config
	I1018 09:18:00.683711  330193 cni.go:84] Creating CNI manager for ""
	I1018 09:18:00.683732  330193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:18:00.683746  330193 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:18:00.683770  330193 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-444637 NodeName:newest-cni-444637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:18:00.683897  330193 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-444637"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:18:00.683961  330193 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:18:00.693538  330193 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:18:00.693611  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:18:00.701785  330193 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:18:00.715623  330193 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:18:00.729315  330193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:18:00.742706  330193 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:18:00.746993  330193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:18:00.758274  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:00.846197  330193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:18:00.874953  330193 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637 for IP: 192.168.103.2
	I1018 09:18:00.874980  330193 certs.go:195] generating shared ca certs ...
	I1018 09:18:00.875000  330193 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:00.875152  330193 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:18:00.875197  330193 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:18:00.875207  330193 certs.go:257] generating profile certs ...
	I1018 09:18:00.875295  330193 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/client.key
	I1018 09:18:00.875391  330193 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key.d9d366ba
	I1018 09:18:00.875439  330193 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key
	I1018 09:18:00.875557  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:18:00.875586  330193 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:18:00.875596  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:18:00.875619  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:18:00.875641  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:18:00.875661  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:18:00.875704  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:18:00.876245  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:18:00.896645  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:18:00.916475  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:18:00.937413  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:18:00.962164  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:18:00.982149  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:18:01.001065  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:18:01.021602  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:18:01.041260  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:18:01.060553  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:18:01.080521  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:18:01.099406  330193 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:18:01.112902  330193 ssh_runner.go:195] Run: openssl version
	I1018 09:18:01.119558  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:18:01.128761  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.133075  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.133130  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.169581  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:18:01.178326  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:18:01.187653  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.191858  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.191912  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.227900  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:18:01.236865  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:18:01.245974  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.250554  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.250615  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.285905  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
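	The 51391683.0, 3ec20f2e.0 and b5213941.0 link names above follow OpenSSL's subject-hash lookup scheme: openssl finds a trusted CA by hashing its subject and looking for <hash>.0 in the certs directory. The two commands in each pair above fit together like this (a sketch of the pattern, not an additional step from the log):
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"             # c_rehash-style lookup name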
	I1018 09:18:01.295059  330193 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:18:01.299170  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:18:01.334401  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:18:01.369411  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:18:01.417245  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:18:01.463956  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:18:01.519260  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
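	Each -checkend 86400 run above asks whether the certificate stays valid for at least the next 86400 seconds (24 hours): openssl exits 0 if so and 1 if the cert would expire within that window, and the exit status is all the caller needs. Standalone, using one of the paths from the log:
		openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
		  && echo "still valid for >= 24h" \
		  || echo "expires within 24h"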
	I1018 09:18:01.564643  330193 kubeadm.go:400] StartCluster: {Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:18:01.564725  330193 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:18:01.564799  330193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:18:01.596025  330193 cri.go:89] found id: "014aa61b2a700319893c6b11615bd85925597f0738e1bf960657bba99a7ac5ea"
	I1018 09:18:01.596053  330193 cri.go:89] found id: "b91ae2df424fdafe037b2eea7a39a37f80929e5ab4c76c1169ce7ba3b9a4bdbd"
	I1018 09:18:01.596059  330193 cri.go:89] found id: "49cf2e65f5a6801e31db940684d60041512ed73bbf34778abdfd8025afc8b25b"
	I1018 09:18:01.596064  330193 cri.go:89] found id: "390882244d27208c7b2d7d0538a0ff970ed197d0a63b391f3e1c81bd7b8255df"
	I1018 09:18:01.596069  330193 cri.go:89] found id: ""
	I1018 09:18:01.596114  330193 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:18:01.609602  330193 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:01Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:18:01.609687  330193 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:18:01.619278  330193 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:18:01.619297  330193 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:18:01.619376  330193 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:18:01.628525  330193 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:18:01.629710  330193 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-444637" does not appear in /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:18:01.630508  330193 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-5897/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-444637" cluster setting kubeconfig missing "newest-cni-444637" context setting]
	I1018 09:18:01.631708  330193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.633868  330193 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:18:01.643225  330193 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1018 09:18:01.643268  330193 kubeadm.go:601] duration metric: took 23.964839ms to restartPrimaryControlPlane
	I1018 09:18:01.643282  330193 kubeadm.go:402] duration metric: took 78.647978ms to StartCluster
	I1018 09:18:01.643303  330193 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.643398  330193 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:18:01.645409  330193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.645688  330193 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:18:01.645769  330193 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:18:01.645862  330193 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-444637"
	I1018 09:18:01.645882  330193 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-444637"
	W1018 09:18:01.645893  330193 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:18:01.645893  330193 addons.go:69] Setting dashboard=true in profile "newest-cni-444637"
	I1018 09:18:01.645921  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.645934  330193 addons.go:238] Setting addon dashboard=true in "newest-cni-444637"
	W1018 09:18:01.645945  330193 addons.go:247] addon dashboard should already be in state true
	I1018 09:18:01.645945  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:18:01.645948  330193 addons.go:69] Setting default-storageclass=true in profile "newest-cni-444637"
	I1018 09:18:01.645973  330193 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-444637"
	I1018 09:18:01.645980  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.646303  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.646463  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.646481  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.647698  330193 out.go:179] * Verifying Kubernetes components...
	I1018 09:18:01.649210  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:01.673812  330193 addons.go:238] Setting addon default-storageclass=true in "newest-cni-444637"
	W1018 09:18:01.673837  330193 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:18:01.673877  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.674375  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.674516  330193 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:18:01.678901  330193 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:18:01.678924  330193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:18:01.678985  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.679140  330193 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:18:01.680475  330193 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:18:01.681672  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:18:01.681729  330193 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:18:01.681827  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.707736  330193 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:18:01.707766  330193 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:18:01.707826  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.713270  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.719016  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.734187  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.812631  330193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:18:01.828229  330193 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:18:01.828317  330193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:18:01.829858  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:18:01.835854  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:18:01.835874  330193 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:18:01.845491  330193 api_server.go:72] duration metric: took 199.769202ms to wait for apiserver process to appear ...
	I1018 09:18:01.845522  330193 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:18:01.845544  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:01.852363  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:18:01.854253  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:18:01.854275  330193 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:18:01.872324  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:18:01.872363  330193 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:18:01.891549  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:18:01.891576  330193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:18:01.910545  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:18:01.910574  330193 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:18:01.928312  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:18:01.928337  330193 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:18:01.942869  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:18:01.942897  330193 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:18:01.957264  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:18:01.957287  330193 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:18:01.971834  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:18:01.971871  330193 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:18:01.988808  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
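	Each manifest above is scp'd onto the node and then applied in one kubectl invocation over the same SSH channel. As a rough sketch of that pattern (not minikube's actual ssh_runner code), here is a minimal golang.org/x/crypto/ssh client; the endpoint, key path, and command are copied from the log lines above purely for illustration:
	
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		// Key path and endpoint taken from the sshutil.go lines above;
		// treat them as illustrative values, not a stable interface.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33133", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
	
		// Same shape as the Run: lines in the log: one command per session.
		out, err := sess.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
			"/var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml")
		fmt.Println(string(out), err)
	}
	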
	I1018 09:18:03.360064  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:18:03.360099  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:18:03.360117  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:03.416525  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1018 09:18:03.416558  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1018 09:18:03.845768  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:03.850882  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:18:03.850913  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:18:03.925688  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.095784279s)
	I1018 09:18:03.925778  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.073377378s)
	I1018 09:18:03.925913  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.937061029s)
	I1018 09:18:03.929127  330193 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-444637 addons enable metrics-server
	
	I1018 09:18:03.937380  330193 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1018 09:18:01.035250  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:03.035670  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	I1018 09:18:03.938934  330193 addons.go:514] duration metric: took 2.293172614s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:18:04.346493  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:04.351148  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:18:04.351178  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:18:04.845878  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:04.850252  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 09:18:04.851396  330193 api_server.go:141] control plane version: v1.34.1
	I1018 09:18:04.851430  330193 api_server.go:131] duration metric: took 3.005900151s to wait for apiserver health ...
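	The 403 -> 500 -> 200 progression above is the normal apiserver boot sequence: /healthz first rejects the anonymous probe until the RBAC bootstrap roles (including system:public-info-viewer) exist, then returns 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally answers "ok". A minimal polling sketch in Go, assuming the same endpoint and a self-signed serving cert (hence InsecureSkipVerify); this is an illustration, not minikube's actual api_server.go:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test cluster serves a self-signed cert; skip verification
			// the way an anonymous probe would.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.103.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // "ok"
				}
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between checks
		}
	}
	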
	I1018 09:18:04.851440  330193 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:18:04.855053  330193 system_pods.go:59] 8 kube-system pods found
	I1018 09:18:04.855092  330193 system_pods.go:61] "coredns-66bc5c9577-gc5dd" [7fab8a8d-bdb4-47d4-bf7d-d03341018666] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:18:04.855100  330193 system_pods.go:61] "etcd-newest-cni-444637" [b54d61ad-b52d-4343-ba3a-a64b03934319] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:18:04.855111  330193 system_pods.go:61] "kindnet-qmlcq" [2c82849a-5511-43a1-a300-a7f46df288ec] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 09:18:04.855117  330193 system_pods.go:61] "kube-apiserver-newest-cni-444637" [a9136c1f-8962-45f7-b005-05bd3f856403] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:18:04.855124  330193 system_pods.go:61] "kube-controller-manager-newest-cni-444637" [b8d840d7-04c3-495c-aafa-cc8a06e58f06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:18:04.855130  330193 system_pods.go:61] "kube-proxy-hbkn5" [d70417da-43f2-4d8c-a088-07cea5225c34] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:18:04.855138  330193 system_pods.go:61] "kube-scheduler-newest-cni-444637" [175527c5-4260-4e39-be83-4c36417f3cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:18:04.855142  330193 system_pods.go:61] "storage-provisioner" [b0974a78-b6ad-45c3-8241-86f8bb7bc65b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:18:04.855151  330193 system_pods.go:74] duration metric: took 3.706424ms to wait for pod list to return data ...
	I1018 09:18:04.855162  330193 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:18:04.857785  330193 default_sa.go:45] found service account: "default"
	I1018 09:18:04.857804  330193 default_sa.go:55] duration metric: took 2.636173ms for default service account to be created ...
	I1018 09:18:04.857817  330193 kubeadm.go:586] duration metric: took 3.212102689s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:18:04.857837  330193 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:18:04.860449  330193 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:18:04.860472  330193 node_conditions.go:123] node cpu capacity is 8
	I1018 09:18:04.860486  330193 node_conditions.go:105] duration metric: took 2.642504ms to run NodePressure ...
	I1018 09:18:04.860498  330193 start.go:241] waiting for startup goroutines ...
	I1018 09:18:04.860504  330193 start.go:246] waiting for cluster config update ...
	I1018 09:18:04.860514  330193 start.go:255] writing updated cluster config ...
	I1018 09:18:04.860806  330193 ssh_runner.go:195] Run: rm -f paused
	I1018 09:18:04.910604  330193 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:18:04.913879  330193 out.go:179] * Done! kubectl is now configured to use "newest-cni-444637" cluster and "default" namespace by default
	W1018 09:18:05.535906  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:08.034961  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:10.535644  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:13.036211  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:15.534175  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:17.535056  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	I1018 09:18:19.034808  324191 pod_ready.go:94] pod "coredns-66bc5c9577-bpcsk" is "Ready"
	I1018 09:18:19.034833  324191 pod_ready.go:86] duration metric: took 38.506130218s for pod "coredns-66bc5c9577-bpcsk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.038302  324191 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.044158  324191 pod_ready.go:94] pod "etcd-default-k8s-diff-port-986220" is "Ready"
	I1018 09:18:19.044183  324191 pod_ready.go:86] duration metric: took 5.852883ms for pod "etcd-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.047009  324191 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.052078  324191 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-986220" is "Ready"
	I1018 09:18:19.052102  324191 pod_ready.go:86] duration metric: took 5.068886ms for pod "kube-apiserver-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.054584  324191 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.233033  324191 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-986220" is "Ready"
	I1018 09:18:19.233061  324191 pod_ready.go:86] duration metric: took 178.456789ms for pod "kube-controller-manager-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.433931  324191 pod_ready.go:83] waiting for pod "kube-proxy-vvtpl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.832592  324191 pod_ready.go:94] pod "kube-proxy-vvtpl" is "Ready"
	I1018 09:18:19.832619  324191 pod_ready.go:86] duration metric: took 398.658534ms for pod "kube-proxy-vvtpl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:20.033393  324191 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:20.432734  324191 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-986220" is "Ready"
	I1018 09:18:20.432777  324191 pod_ready.go:86] duration metric: took 399.356966ms for pod "kube-scheduler-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:20.432793  324191 pod_ready.go:40] duration metric: took 39.908263478s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:18:20.482249  324191 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:18:20.484082  324191 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-986220" cluster and "default" namespace by default
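	The pod_ready.go wait above keeps polling each labeled pod until it reports the Ready condition as True (or is gone). A minimal sketch of that condition check using the client-go types; the pod object here is a stand-in, and the helper name is made up for illustration:
	
	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
	)
	
	// isPodReady mirrors the check behind the `is "Ready"` lines above: a pod
	// counts as ready only when its PodReady condition reports True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// A stand-in pod object; in the real wait this comes from a List call
		// against the kube-system namespace with the labels named in the log.
		pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionTrue},
		}}}
		fmt.Println(isPodReady(pod)) // true
	}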
	
	
	==> CRI-O <==
	Oct 18 09:17:50 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:17:50.37660541Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:17:50 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:17:50.594915866Z" level=info msg="Removing container: 648ba523e85c42ddd3f874e5a62114aedc60dc27b5aac43dc0f4df0d049c8b51" id=3d29b266-28d4-4b19-94c4-de12c061ec2a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:17:50 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:17:50.604576847Z" level=info msg="Removed container 648ba523e85c42ddd3f874e5a62114aedc60dc27b5aac43dc0f4df0d049c8b51: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk/dashboard-metrics-scraper" id=3d29b266-28d4-4b19-94c4-de12c061ec2a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.496618519Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3a4524f2-9e1f-48a7-8bba-eb8efc905a21 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.49770216Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3fd14224-5a7d-4c8a-b6b5-6e2e6d159809 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.498769706Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk/dashboard-metrics-scraper" id=8328aacf-fb1b-4c48-a97d-c6ffb0293bdf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.499056317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.506274652Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.507122472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.539442507Z" level=info msg="Created container 327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk/dashboard-metrics-scraper" id=8328aacf-fb1b-4c48-a97d-c6ffb0293bdf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.540188009Z" level=info msg="Starting container: 327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db" id=71d8f86f-8d7d-4135-8ecb-071f2805bda3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.542628456Z" level=info msg="Started container" PID=1761 containerID=327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk/dashboard-metrics-scraper id=71d8f86f-8d7d-4135-8ecb-071f2805bda3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1c8591e7cd26473a6f722f35335c41259825ba74a758f9946a72a1b36b6a7ff3
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.650916263Z" level=info msg="Removing container: ed0214dde9b6c3a395be05c33ba7e949e4400f0bffbec160bb16ac9d6ef8dcb1" id=ce14e8e9-7232-4c15-986b-584751743afb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.651960242Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6abe7531-5087-4bae-8347-e3966bf3e297 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.652887524Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0dfb580b-7ba8-4e56-9ff7-36d7ced157c8 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.654148565Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=22a7fe9d-1253-4991-8296-725d4736d557 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.654401697Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.659845136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.660082125Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1b1c22127fb2b778be7f78d3b8dea141a3085239500ffff1acde8567eb5a0457/merged/etc/passwd: no such file or directory"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.660120642Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1b1c22127fb2b778be7f78d3b8dea141a3085239500ffff1acde8567eb5a0457/merged/etc/group: no such file or directory"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.661178351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.663421932Z" level=info msg="Removed container ed0214dde9b6c3a395be05c33ba7e949e4400f0bffbec160bb16ac9d6ef8dcb1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk/dashboard-metrics-scraper" id=ce14e8e9-7232-4c15-986b-584751743afb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.692020899Z" level=info msg="Created container 49ce691fa7cdd9b79bec964c1afdc4a8c154310a7c6ea44e93cd0d76e7d23447: kube-system/storage-provisioner/storage-provisioner" id=22a7fe9d-1253-4991-8296-725d4736d557 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.692739945Z" level=info msg="Starting container: 49ce691fa7cdd9b79bec964c1afdc4a8c154310a7c6ea44e93cd0d76e7d23447" id=388f2115-1f1f-402d-b267-1f668160e9f7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.694910095Z" level=info msg="Started container" PID=1771 containerID=49ce691fa7cdd9b79bec964c1afdc4a8c154310a7c6ea44e93cd0d76e7d23447 description=kube-system/storage-provisioner/storage-provisioner id=388f2115-1f1f-402d-b267-1f668160e9f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=80308f3b6b6914952b71bbb297a4c4a8e7bb1ed4c5531ad3a601a8456f2c77af
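	The CreateContainer/StartContainer/RemoveContainer lines above are CRI RPCs served by CRI-O over its local socket (the dashboard-metrics-scraper cycle is the kubelet recreating a crash-looping container, attempt 2). As a rough sketch of talking to that API directly with the k8s.io/cri-api client, assuming the default crio.sock path; this is illustrative, not how the test harness queries the runtime:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Default CRI-O socket; the crio[557] log lines above come from this daemon.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Roughly what `crictl ps -a` does: list containers in every state.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Id[:13], c.State, c.Metadata.Name)
		}
	}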
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	49ce691fa7cdd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   80308f3b6b691       storage-provisioner                                    kube-system
	327aee2aa78ae       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   1c8591e7cd264       dashboard-metrics-scraper-6ffb444bf9-m92mk             kubernetes-dashboard
	dd2f1f47e902c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   06db64c6d60fe       kubernetes-dashboard-855c9754f9-gwp9p                  kubernetes-dashboard
	514c449ade6a7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   e4c12ee041ff0       coredns-66bc5c9577-bpcsk                               kube-system
	0b8ceb3576fbb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   9ba81f26a4853       busybox                                                default
	4d5cc19ffee18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   80308f3b6b691       storage-provisioner                                    kube-system
	3e2529aa2dd60       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   3cc2a1661e9b0       kindnet-cj6bv                                          kube-system
	efcf153f5528d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   8ffb5ff4bab9c       kube-proxy-vvtpl                                       kube-system
	8956123c13137       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   21d4c7fb2ed98       kube-controller-manager-default-k8s-diff-port-986220   kube-system
	8d1ab9fe3eb84       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   4f59887585f9f       kube-apiserver-default-k8s-diff-port-986220            kube-system
	1dc67601595ac       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   f42d43919f969       kube-scheduler-default-k8s-diff-port-986220            kube-system
	bad27ff83c636       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   41917a5894ecb       etcd-default-k8s-diff-port-986220                      kube-system
	
	
	==> coredns [514c449ade6a78cd215a5ddfcf373f35a48b107fc90ec5014b5ea1fcf64cfc79] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36493 - 59691 "HINFO IN 4214345634566612825.7215447916724245711. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.114176821s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
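	The i/o timeouts above are coredns dialing the apiserver through the kubernetes Service ClusterIP (10.96.0.1:443); a timeout on that path typically means the Service VIP was not yet reachable from the pod network, e.g. before kube-proxy and the CNI finished reprogramming rules after the restart, which also explains the earlier WARNING about starting with an unsynced API. A quick probe of the same path, as a sketch assuming it runs inside the pod network namespace:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Same target coredns was dialing; a timeout here means the Service
		// VIP is not yet routable from this network namespace.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("service VIP unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("service VIP reachable")
	}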
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-986220
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-986220
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=default-k8s-diff-port-986220
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_16_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:16:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-986220
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:18:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:18:30 +0000   Sat, 18 Oct 2025 09:16:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:18:30 +0000   Sat, 18 Oct 2025 09:16:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:18:30 +0000   Sat, 18 Oct 2025 09:16:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:18:30 +0000   Sat, 18 Oct 2025 09:16:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-986220
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                f86ae77e-f46d-47da-846c-c937a0a7701a
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-bpcsk                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-default-k8s-diff-port-986220                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-cj6bv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-986220             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-986220    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-vvtpl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-986220             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-m92mk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gwp9p                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  119s (x8 over 2m)  kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s (x8 over 2m)  kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x8 over 2m)  kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     113s               kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node default-k8s-diff-port-986220 event: Registered Node default-k8s-diff-port-986220 in Controller
	  Normal  NodeReady                96s                kubelet          Node default-k8s-diff-port-986220 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node default-k8s-diff-port-986220 event: Registered Node default-k8s-diff-port-986220 in Controller
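	The percentages in the Allocated resources table are simply summed requests over node allocatable: 850m of 8 CPUs is about 10.6%, which kubectl truncates to 10%. A small check of that arithmetic with the apimachinery resource package, using the values from the table above:
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/apimachinery/pkg/api/resource"
	)
	
	func main() {
		req := resource.MustParse("850m") // summed CPU requests from the table
		alloc := resource.MustParse("8")  // node allocatable CPU
	
		pct := float64(req.MilliValue()) / float64(alloc.MilliValue()) * 100
		fmt.Printf("cpu requests: %.1f%%\n", pct) // ~10.6%, shown as 10% above
	}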
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[ +43.809014] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 6f 4b 2b 7f 46 08 06
	[  +0.000367] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	
	
	==> etcd [bad27ff83c63687be534ccd3f079002f13a4d8cf081095fd1e212a53f3010fbf] <==
	{"level":"warn","ts":"2025-10-18T09:17:38.310518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.326277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.335964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.345317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.352490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.361029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.369133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.377163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.387201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.396827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.405733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.414387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.423709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.432277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.452067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.460714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.472996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.482595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.492484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.509006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.516015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.532330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.541563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.550863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.606924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54504","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:35 up  1:01,  0 user,  load average: 3.29, 3.57, 2.53
	Linux default-k8s-diff-port-986220 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3e2529aa2dd60af7f9c954b73b314b5ec999e7f5a4e0b8dd5a9e4f8b4143a321] <==
	I1018 09:17:40.148446       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:17:40.148793       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 09:17:40.149053       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:17:40.149070       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:17:40.149096       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:17:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:17:40.353944       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:17:40.353989       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:17:40.354004       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:17:40.386732       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:17:40.686791       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:17:40.686835       1 metrics.go:72] Registering metrics
	I1018 09:17:40.686971       1 controller.go:711] "Syncing nftables rules"
	I1018 09:17:50.352810       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:17:50.352855       1 main.go:301] handling current node
	I1018 09:18:00.360513       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:18:00.360545       1 main.go:301] handling current node
	I1018 09:18:10.353758       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:18:10.353798       1 main.go:301] handling current node
	I1018 09:18:20.353484       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:18:20.353534       1 main.go:301] handling current node
	I1018 09:18:30.353785       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:18:30.353820       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8d1ab9fe3eb84ef483a99bbfe79d01dfa34dfdff518ca313e3c2299c6723b35e] <==
	I1018 09:17:39.185256       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 09:17:39.185322       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:17:39.185403       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 09:17:39.185438       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:17:39.185449       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:17:39.185456       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:17:39.185470       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:17:39.185542       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:17:39.186043       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:17:39.189880       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:17:39.197813       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:17:39.219801       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:17:39.219917       1 policy_source.go:240] refreshing policies
	I1018 09:17:39.236012       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:17:39.590292       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:17:39.642070       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:17:39.654077       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:17:39.695440       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:17:39.705324       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:17:39.764270       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.6.150"}
	I1018 09:17:39.779745       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.79.212"}
	I1018 09:17:40.091330       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:17:42.586953       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:17:42.943107       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:17:42.988673       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8956123c1313708cc585f6ee981938531d1fde0ef837a5cdbf5b02ab1fb0c549] <==
	I1018 09:17:42.533562       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:17:42.533579       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:17:42.533628       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:17:42.534139       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:17:42.534196       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:17:42.534465       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:17:42.534515       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:17:42.534543       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:17:42.534878       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:17:42.536015       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:17:42.536186       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:17:42.538227       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:17:42.538367       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:17:42.538488       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-986220"
	I1018 09:17:42.538545       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 09:17:42.539317       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:17:42.541583       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:17:42.541601       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:17:42.541633       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:17:42.541669       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:17:42.542776       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:17:42.546160       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:17:42.548182       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:17:42.550499       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:17:42.562824       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [efcf153f5528d91cf81fb7b54240b482e4822aa80a11aa28014d0e8723503d50] <==
	I1018 09:17:39.923303       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:17:39.982912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:17:40.083338       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:17:40.083509       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 09:17:40.083751       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:17:40.109977       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:17:40.110061       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:17:40.117018       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:17:40.117444       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:17:40.117822       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:17:40.121267       1 config.go:200] "Starting service config controller"
	I1018 09:17:40.122834       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:17:40.122413       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:17:40.122932       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:17:40.122425       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:17:40.122945       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:17:40.121937       1 config.go:309] "Starting node config controller"
	I1018 09:17:40.122952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:17:40.122986       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:17:40.223272       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:17:40.223295       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:17:40.223326       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1dc67601595acad3b95b404bf690768d89426dc4a4256db06ee931235af514af] <==
	I1018 09:17:37.264655       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:17:39.549049       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:17:39.549089       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:17:39.555435       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 09:17:39.555769       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 09:17:39.555634       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:17:39.555866       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:17:39.555591       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:17:39.556403       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:17:39.556909       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:17:39.557001       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:17:39.656132       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 09:17:39.656979       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:17:39.660112       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:17:43 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:43.270781     714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdszw\" (UniqueName: \"kubernetes.io/projected/c10f0845-9777-48ac-b709-3775518d787b-kube-api-access-kdszw\") pod \"kubernetes-dashboard-855c9754f9-gwp9p\" (UID: \"c10f0845-9777-48ac-b709-3775518d787b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gwp9p"
	Oct 18 09:17:47 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:47.596247     714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gwp9p" podStartSLOduration=1.37645028 podStartE2EDuration="4.596219607s" podCreationTimestamp="2025-10-18 09:17:43 +0000 UTC" firstStartedPulling="2025-10-18 09:17:43.436776339 +0000 UTC m=+7.033609121" lastFinishedPulling="2025-10-18 09:17:46.656545659 +0000 UTC m=+10.253378448" observedRunningTime="2025-10-18 09:17:47.595753413 +0000 UTC m=+11.192586205" watchObservedRunningTime="2025-10-18 09:17:47.596219607 +0000 UTC m=+11.193052400"
	Oct 18 09:17:48 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:48.751248     714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:17:49 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:49.588914     714 scope.go:117] "RemoveContainer" containerID="648ba523e85c42ddd3f874e5a62114aedc60dc27b5aac43dc0f4df0d049c8b51"
	Oct 18 09:17:50 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:50.593387     714 scope.go:117] "RemoveContainer" containerID="648ba523e85c42ddd3f874e5a62114aedc60dc27b5aac43dc0f4df0d049c8b51"
	Oct 18 09:17:50 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:50.593787     714 scope.go:117] "RemoveContainer" containerID="ed0214dde9b6c3a395be05c33ba7e949e4400f0bffbec160bb16ac9d6ef8dcb1"
	Oct 18 09:17:50 default-k8s-diff-port-986220 kubelet[714]: E1018 09:17:50.593949     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m92mk_kubernetes-dashboard(73227830-04e5-4436-88a9-fbfc12df2b00)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk" podUID="73227830-04e5-4436-88a9-fbfc12df2b00"
	Oct 18 09:17:51 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:51.597992     714 scope.go:117] "RemoveContainer" containerID="ed0214dde9b6c3a395be05c33ba7e949e4400f0bffbec160bb16ac9d6ef8dcb1"
	Oct 18 09:17:51 default-k8s-diff-port-986220 kubelet[714]: E1018 09:17:51.598150     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m92mk_kubernetes-dashboard(73227830-04e5-4436-88a9-fbfc12df2b00)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk" podUID="73227830-04e5-4436-88a9-fbfc12df2b00"
	Oct 18 09:17:55 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:55.527819     714 scope.go:117] "RemoveContainer" containerID="ed0214dde9b6c3a395be05c33ba7e949e4400f0bffbec160bb16ac9d6ef8dcb1"
	Oct 18 09:17:55 default-k8s-diff-port-986220 kubelet[714]: E1018 09:17:55.528134     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m92mk_kubernetes-dashboard(73227830-04e5-4436-88a9-fbfc12df2b00)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk" podUID="73227830-04e5-4436-88a9-fbfc12df2b00"
	Oct 18 09:18:10 default-k8s-diff-port-986220 kubelet[714]: I1018 09:18:10.496035     714 scope.go:117] "RemoveContainer" containerID="ed0214dde9b6c3a395be05c33ba7e949e4400f0bffbec160bb16ac9d6ef8dcb1"
	Oct 18 09:18:10 default-k8s-diff-port-986220 kubelet[714]: I1018 09:18:10.649351     714 scope.go:117] "RemoveContainer" containerID="ed0214dde9b6c3a395be05c33ba7e949e4400f0bffbec160bb16ac9d6ef8dcb1"
	Oct 18 09:18:10 default-k8s-diff-port-986220 kubelet[714]: I1018 09:18:10.649594     714 scope.go:117] "RemoveContainer" containerID="327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db"
	Oct 18 09:18:10 default-k8s-diff-port-986220 kubelet[714]: E1018 09:18:10.649817     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m92mk_kubernetes-dashboard(73227830-04e5-4436-88a9-fbfc12df2b00)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk" podUID="73227830-04e5-4436-88a9-fbfc12df2b00"
	Oct 18 09:18:10 default-k8s-diff-port-986220 kubelet[714]: I1018 09:18:10.651516     714 scope.go:117] "RemoveContainer" containerID="4d5cc19ffee186783c97a10c5f2a7ef492399eb6f8acbfa30d889652dbfdcd2b"
	Oct 18 09:18:15 default-k8s-diff-port-986220 kubelet[714]: I1018 09:18:15.527654     714 scope.go:117] "RemoveContainer" containerID="327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db"
	Oct 18 09:18:15 default-k8s-diff-port-986220 kubelet[714]: E1018 09:18:15.527855     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m92mk_kubernetes-dashboard(73227830-04e5-4436-88a9-fbfc12df2b00)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk" podUID="73227830-04e5-4436-88a9-fbfc12df2b00"
	Oct 18 09:18:27 default-k8s-diff-port-986220 kubelet[714]: I1018 09:18:27.495925     714 scope.go:117] "RemoveContainer" containerID="327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db"
	Oct 18 09:18:27 default-k8s-diff-port-986220 kubelet[714]: E1018 09:18:27.496114     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m92mk_kubernetes-dashboard(73227830-04e5-4436-88a9-fbfc12df2b00)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk" podUID="73227830-04e5-4436-88a9-fbfc12df2b00"
	Oct 18 09:18:32 default-k8s-diff-port-986220 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:18:32 default-k8s-diff-port-986220 kubelet[714]: I1018 09:18:32.538981     714 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 09:18:32 default-k8s-diff-port-986220 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:18:32 default-k8s-diff-port-986220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:18:32 default-k8s-diff-port-986220 systemd[1]: kubelet.service: Consumed 1.906s CPU time.
	
	
	==> kubernetes-dashboard [dd2f1f47e902c1cbe5cb90ca529db2c31f57a6d6f5fdebcf2ed75577b59a049b] <==
	2025/10/18 09:17:46 Starting overwatch
	2025/10/18 09:17:46 Using namespace: kubernetes-dashboard
	2025/10/18 09:17:46 Using in-cluster config to connect to apiserver
	2025/10/18 09:17:46 Using secret token for csrf signing
	2025/10/18 09:17:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:17:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:17:46 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:17:46 Generating JWE encryption key
	2025/10/18 09:17:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:17:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:17:47 Initializing JWE encryption key from synchronized object
	2025/10/18 09:17:47 Creating in-cluster Sidecar client
	2025/10/18 09:17:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:17:47 Serving insecurely on HTTP port: 9090
	2025/10/18 09:18:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [49ce691fa7cdd9b79bec964c1afdc4a8c154310a7c6ea44e93cd0d76e7d23447] <==
	I1018 09:18:10.708505       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:18:10.717040       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:18:10.717142       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:18:10.719807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:14.175658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:18.436412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:22.035510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:25.089296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:28.112290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:28.117322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:18:28.117503       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:18:28.117601       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b90e1b13-d855-40dd-8fdf-9ac19eb23314", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-986220_64931e38-d0cf-47be-8edb-003eefbc390c became leader
	I1018 09:18:28.117661       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-986220_64931e38-d0cf-47be-8edb-003eefbc390c!
	W1018 09:18:28.120484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:28.124185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:18:28.218310       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-986220_64931e38-d0cf-47be-8edb-003eefbc390c!
	W1018 09:18:30.127257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:30.134244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:32.138522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:32.142936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:34.146090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:34.150488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [4d5cc19ffee186783c97a10c5f2a7ef492399eb6f8acbfa30d889652dbfdcd2b] <==
	I1018 09:17:39.890190       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:18:09.892944       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
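The kube-proxy warning captured above ("nodePortAddresses is unset; NodePort connections will be accepted on all local IPs") is advisory and does not affect the test result. Its suggested remedy corresponds to the nodePortAddresses field of KubeProxyConfiguration; a minimal sketch of applying it to this profile, assuming the stock kubeadm-style kube-proxy ConfigMap layout and the conventional k8s-app=kube-proxy pod label (both assumptions, not shown in the logs):

	# Set nodePortAddresses in the embedded KubeProxyConfiguration, e.g.:
	#   nodePortAddresses:
	#     - primary        # same effect as the --nodeport-addresses primary flag named in the warning
	kubectl --context default-k8s-diff-port-986220 -n kube-system edit configmap kube-proxy
	# Recreate the kube-proxy pod so the new config is loaded
	kubectl --context default-k8s-diff-port-986220 -n kube-system delete pod -l k8s-app=kube-proxy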
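The kubelet entries above show dashboard-metrics-scraper in CrashLoopBackOff, with the back-off doubling from 10s to 20s between restart attempts, consistent with kubelet's exponential restart back-off. A sketch of how the exit reason could be inspected, using the pod name taken from the log (--previous returns output from the last terminated container):

	kubectl --context default-k8s-diff-port-986220 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-m92mk --previous
	kubectl --context default-k8s-diff-port-986220 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-m92mk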
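The two storage-provisioner captures tell a before/after story: the first instance died with "dial tcp 10.96.0.1:443: i/o timeout" (it could not reach the in-cluster apiserver VIP around the node restart), and its replacement then acquired the kube-system/k8s.io-minikube-hostpath leader lock. The repeated "v1 Endpoints is deprecated" warnings come from that Endpoints-based leader election and are harmless here. A sketch of checking both against the live cluster, with object names taken from the log:

	kubectl --context default-k8s-diff-port-986220 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context default-k8s-diff-port-986220 -n kube-system logs storage-provisioner --previous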
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-986220 -n default-k8s-diff-port-986220
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-986220 -n default-k8s-diff-port-986220: exit status 2 (324.297726ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-986220 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-986220
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-986220:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575",
	        "Created": "2025-10-18T09:16:19.86673265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 324454,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:17:29.591929444Z",
	            "FinishedAt": "2025-10-18T09:17:27.758633883Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575/hostname",
	        "HostsPath": "/var/lib/docker/containers/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575/hosts",
	        "LogPath": "/var/lib/docker/containers/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575/48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575-json.log",
	        "Name": "/default-k8s-diff-port-986220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-986220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-986220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "48881c0b9d8337dba348ebf21ea33e5939947c73c9cc7a2773507be18d3ba575",
	                "LowerDir": "/var/lib/docker/overlay2/ebca512855d8718cc57a74f0c5a7cb78a8d4717430e6e9b0fbcfa814a3464016-init/diff:/var/lib/docker/overlay2/76f783f469ac4c930bc111d7df4bd2b3a57bdcd762971c7ce0ba7a7b959771a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ebca512855d8718cc57a74f0c5a7cb78a8d4717430e6e9b0fbcfa814a3464016/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ebca512855d8718cc57a74f0c5a7cb78a8d4717430e6e9b0fbcfa814a3464016/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ebca512855d8718cc57a74f0c5a7cb78a8d4717430e6e9b0fbcfa814a3464016/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-986220",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-986220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-986220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-986220",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-986220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "576063fdfdc15bd6e7ea20ccdd827695dae890ec9309272560e5870ffda77da3",
	            "SandboxKey": "/var/run/docker/netns/576063fdfdc1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-986220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:a8:ba:84:d6:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef55982bb9e9da39f5725d618404d1c9094984213effce96590128a5ebc25231",
	                    "EndpointID": "c5014c4620214713400ed8439671c20f436fc0c72a973d742500dd4cd1e3ef7b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-986220",
	                        "48881c0b9d83"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
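The full JSON above can also be queried field-by-field with docker's Go templates, which is how the harness itself reads container state later in this log (see the --format={{.State.Status}} calls below); for example:

	# Overall container state
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-986220
	# Host port mapped to the apiserver port 8444 inside the container
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-986220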
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-986220 -n default-k8s-diff-port-986220
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-986220 -n default-k8s-diff-port-986220: exit status 2 (316.522392ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-986220 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-986220 logs -n 25: (1.104216463s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-986220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-880603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-986220 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ image   │ no-preload-031066 image list --format=json                                                                                                                                                                                                    │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ pause   │ -p no-preload-031066 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-986220 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:18 UTC │
	│ delete  │ -p no-preload-031066                                                                                                                                                                                                                          │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ delete  │ -p no-preload-031066                                                                                                                                                                                                                          │ no-preload-031066            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-444637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ stop    │ -p newest-cni-444637 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-444637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ start   │ -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:18 UTC │
	│ image   │ newest-cni-444637 image list --format=json                                                                                                                                                                                                    │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ pause   │ -p newest-cni-444637 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ image   │ embed-certs-880603 image list --format=json                                                                                                                                                                                                   │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ pause   │ -p embed-certs-880603 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ delete  │ -p newest-cni-444637                                                                                                                                                                                                                          │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ delete  │ -p newest-cni-444637                                                                                                                                                                                                                          │ newest-cni-444637            │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ delete  │ -p embed-certs-880603                                                                                                                                                                                                                         │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ delete  │ -p embed-certs-880603                                                                                                                                                                                                                         │ embed-certs-880603           │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ image   │ default-k8s-diff-port-986220 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ pause   │ -p default-k8s-diff-port-986220 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-986220 │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:17:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:17:54.427005  330193 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:17:54.427270  330193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:54.427281  330193 out.go:374] Setting ErrFile to fd 2...
	I1018 09:17:54.427287  330193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:54.427525  330193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:17:54.428050  330193 out.go:368] Setting JSON to false
	I1018 09:17:54.429280  330193 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3622,"bootTime":1760775452,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:17:54.429387  330193 start.go:141] virtualization: kvm guest
	I1018 09:17:54.431635  330193 out.go:179] * [newest-cni-444637] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:17:54.432952  330193 notify.go:220] Checking for updates...
	I1018 09:17:54.432979  330193 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:17:54.434488  330193 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:17:54.435897  330193 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:17:54.437111  330193 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:17:54.438264  330193 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:17:54.439545  330193 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:17:54.441204  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:54.441727  330193 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:17:54.467746  330193 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:17:54.467827  330193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:54.527403  330193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 09:17:54.515566485 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:54.527559  330193 docker.go:318] overlay module found
	I1018 09:17:54.529436  330193 out.go:179] * Using the docker driver based on existing profile
	I1018 09:17:54.530557  330193 start.go:305] selected driver: docker
	I1018 09:17:54.530578  330193 start.go:925] validating driver "docker" against &{Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:54.530680  330193 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:17:54.531357  330193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:17:54.591156  330193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 09:17:54.580755477 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:17:54.591532  330193 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:17:54.591566  330193 cni.go:84] Creating CNI manager for ""
	I1018 09:17:54.591617  330193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:17:54.591683  330193 start.go:349] cluster config:
	{Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:17:54.593449  330193 out.go:179] * Starting "newest-cni-444637" primary control-plane node in "newest-cni-444637" cluster
	I1018 09:17:54.594724  330193 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:17:54.596122  330193 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:17:54.597292  330193 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:17:54.597335  330193 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:17:54.597376  330193 cache.go:58] Caching tarball of preloaded images
	I1018 09:17:54.597366  330193 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:17:54.597499  330193 preload.go:233] Found /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:17:54.597519  330193 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:17:54.597628  330193 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json ...
	I1018 09:17:54.619906  330193 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:17:54.619924  330193 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:17:54.619939  330193 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:17:54.619961  330193 start.go:360] acquireMachinesLock for newest-cni-444637: {Name:mkf6974ca6fc7b22cdf212b383f50d3f090ea59b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:17:54.620020  330193 start.go:364] duration metric: took 43.166µs to acquireMachinesLock for "newest-cni-444637"
	I1018 09:17:54.620037  330193 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:17:54.620042  330193 fix.go:54] fixHost starting: 
	I1018 09:17:54.620234  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:54.638627  330193 fix.go:112] recreateIfNeeded on newest-cni-444637: state=Stopped err=<nil>
	W1018 09:17:54.638652  330193 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:17:51.833553  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:53.833757  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	W1018 09:17:56.034833  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:17:58.534991  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	I1018 09:17:54.640543  330193 out.go:252] * Restarting existing docker container for "newest-cni-444637" ...
	I1018 09:17:54.640644  330193 cli_runner.go:164] Run: docker start newest-cni-444637
	I1018 09:17:54.903916  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:17:54.923445  330193 kic.go:430] container "newest-cni-444637" state is running.
	I1018 09:17:54.923919  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:54.944878  330193 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/config.json ...
	I1018 09:17:54.945143  330193 machine.go:93] provisionDockerMachine start ...
	I1018 09:17:54.945221  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:54.965135  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:54.965422  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:54.965438  330193 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:17:54.966008  330193 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59674->127.0.0.1:33133: read: connection reset by peer
	I1018 09:17:58.102821  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-444637
	
	I1018 09:17:58.102846  330193 ubuntu.go:182] provisioning hostname "newest-cni-444637"
	I1018 09:17:58.102902  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.121992  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.122251  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.122274  330193 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-444637 && echo "newest-cni-444637" | sudo tee /etc/hostname
	I1018 09:17:58.271611  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-444637
	
	I1018 09:17:58.271696  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.295116  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.295331  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.295366  330193 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-444637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-444637/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-444637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:17:58.435338  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:17:58.435406  330193 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-5897/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-5897/.minikube}
	I1018 09:17:58.435457  330193 ubuntu.go:190] setting up certificates
	I1018 09:17:58.435470  330193 provision.go:84] configureAuth start
	I1018 09:17:58.435550  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:58.454683  330193 provision.go:143] copyHostCerts
	I1018 09:17:58.454758  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem, removing ...
	I1018 09:17:58.454789  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem
	I1018 09:17:58.454878  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/key.pem (1675 bytes)
	I1018 09:17:58.455021  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem, removing ...
	I1018 09:17:58.455032  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem
	I1018 09:17:58.455077  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/ca.pem (1078 bytes)
	I1018 09:17:58.455176  330193 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem, removing ...
	I1018 09:17:58.455185  330193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem
	I1018 09:17:58.455229  330193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-5897/.minikube/cert.pem (1123 bytes)
	I1018 09:17:58.455323  330193 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-444637 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-444637]
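	(The server certificate above is generated in Go by the provisioner; a rough openssl equivalent, assuming the same CA files and the SAN list shown in the log, would be:

	    # Sketch only: minikube does this in Go, not via openssl. Bash: uses process substitution.
	    CERTS=/home/jenkins/minikube-integration/21767-5897/.minikube/certs
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	      -subj "/O=jenkins.newest-cni-444637"
	    openssl x509 -req -in server.csr -days 365 \
	      -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" -CAcreateserial -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.103.2,DNS:localhost,DNS:minikube,DNS:newest-cni-444637')
	)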
	I1018 09:17:58.651717  330193 provision.go:177] copyRemoteCerts
	I1018 09:17:58.651791  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:17:58.651850  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.670990  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:58.769295  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:17:58.788403  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:17:58.807495  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:17:58.826308  330193 provision.go:87] duration metric: took 390.822036ms to configureAuth
	I1018 09:17:58.826335  330193 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:17:58.826534  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:58.826624  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:58.845940  330193 main.go:141] libmachine: Using SSH client type: native
	I1018 09:17:58.846169  330193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 09:17:58.846191  330193 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:17:59.117215  330193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:17:59.117238  330193 machine.go:96] duration metric: took 4.172078969s to provisionDockerMachine
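	(The insecure-registry drop-in written during provisioning can be verified from the host; a quick check, using the profile name from this run:

	    minikube -p newest-cni-444637 ssh -- cat /etc/sysconfig/crio.minikube
	    # Expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	)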
	I1018 09:17:59.117253  330193 start.go:293] postStartSetup for "newest-cni-444637" (driver="docker")
	I1018 09:17:59.117266  330193 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:17:59.117338  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:17:59.117401  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.136996  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.235549  330193 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:17:59.239452  330193 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:17:59.239483  330193 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:17:59.239505  330193 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/addons for local assets ...
	I1018 09:17:59.239563  330193 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-5897/.minikube/files for local assets ...
	I1018 09:17:59.239658  330193 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem -> 93942.pem in /etc/ssl/certs
	I1018 09:17:59.239788  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:17:59.248379  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:17:59.268012  330193 start.go:296] duration metric: took 150.737252ms for postStartSetup
	I1018 09:17:59.268099  330193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:17:59.268146  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.287401  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.382795  330193 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:17:59.388305  330193 fix.go:56] duration metric: took 4.768253133s for fixHost
	I1018 09:17:59.388338  330193 start.go:83] releasing machines lock for "newest-cni-444637", held for 4.76830641s
	I1018 09:17:59.388481  330193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-444637
	I1018 09:17:59.407756  330193 ssh_runner.go:195] Run: cat /version.json
	I1018 09:17:59.407798  330193 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:17:59.407876  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:17:59.407803  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	W1018 09:17:56.333478  318609 pod_ready.go:104] pod "coredns-66bc5c9577-7fnw7" is not "Ready", error: <nil>
	I1018 09:17:58.333556  318609 pod_ready.go:94] pod "coredns-66bc5c9577-7fnw7" is "Ready"
	I1018 09:17:58.333585  318609 pod_ready.go:86] duration metric: took 36.506179321s for pod "coredns-66bc5c9577-7fnw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.336410  318609 pod_ready.go:83] waiting for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.341932  318609 pod_ready.go:94] pod "etcd-embed-certs-880603" is "Ready"
	I1018 09:17:58.341964  318609 pod_ready.go:86] duration metric: took 5.525225ms for pod "etcd-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.344669  318609 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.349852  318609 pod_ready.go:94] pod "kube-apiserver-embed-certs-880603" is "Ready"
	I1018 09:17:58.349882  318609 pod_ready.go:86] duration metric: took 5.170321ms for pod "kube-apiserver-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.352067  318609 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.532002  318609 pod_ready.go:94] pod "kube-controller-manager-embed-certs-880603" is "Ready"
	I1018 09:17:58.532034  318609 pod_ready.go:86] duration metric: took 179.946406ms for pod "kube-controller-manager-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:58.732243  318609 pod_ready.go:83] waiting for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.131632  318609 pod_ready.go:94] pod "kube-proxy-k4kcs" is "Ready"
	I1018 09:17:59.131665  318609 pod_ready.go:86] duration metric: took 399.394452ms for pod "kube-proxy-k4kcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.332088  318609 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.734805  318609 pod_ready.go:94] pod "kube-scheduler-embed-certs-880603" is "Ready"
	I1018 09:17:59.734842  318609 pod_ready.go:86] duration metric: took 402.724813ms for pod "kube-scheduler-embed-certs-880603" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:17:59.734856  318609 pod_ready.go:40] duration metric: took 37.912005765s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:17:59.783224  318609 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:17:59.785136  318609 out.go:179] * Done! kubectl is now configured to use "embed-certs-880603" cluster and "default" namespace by default
	I1018 09:17:59.428145  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.430455  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:17:59.580030  330193 ssh_runner.go:195] Run: systemctl --version
	I1018 09:17:59.587085  330193 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:17:59.625510  330193 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:17:59.630784  330193 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:17:59.630846  330193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:17:59.639622  330193 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
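	(The find invocation above is logged with its shell quoting stripped; a copy-pasteable equivalent of the same disable step is:

	    # Rename bridge/podman CNI configs so only the CNI minikube manages is loaded.
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	)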
	I1018 09:17:59.639650  330193 start.go:495] detecting cgroup driver to use...
	I1018 09:17:59.639695  330193 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:17:59.639752  330193 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:17:59.654825  330193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:17:59.668280  330193 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:17:59.668366  330193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:17:59.683973  330193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:17:59.698385  330193 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:17:59.790586  330193 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:17:59.892076  330193 docker.go:234] disabling docker service ...
	I1018 09:17:59.892147  330193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:17:59.908881  330193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:17:59.922861  330193 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:18:00.012767  330193 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:18:00.112051  330193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:18:00.125686  330193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:18:00.142184  330193 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:18:00.142248  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.153446  330193 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:18:00.153510  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.163772  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.173529  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.183180  330193 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:18:00.192357  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.202160  330193 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.211313  330193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:18:00.221003  330193 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:18:00.229269  330193 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:18:00.238137  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:00.320620  330193 ssh_runner.go:195] Run: sudo systemctl restart crio
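	(The sed pipeline above edits four settings in /etc/crio/crio.conf.d/02-crio.conf before the restart; they can be confirmed afterwards with:

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	)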
	I1018 09:18:00.435033  330193 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:18:00.435106  330193 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:18:00.439539  330193 start.go:563] Will wait 60s for crictl version
	I1018 09:18:00.439606  330193 ssh_runner.go:195] Run: which crictl
	I1018 09:18:00.443682  330193 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:18:00.469987  330193 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:18:00.470070  330193 ssh_runner.go:195] Run: crio --version
	I1018 09:18:00.500186  330193 ssh_runner.go:195] Run: crio --version
	I1018 09:18:00.531772  330193 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:18:00.533155  330193 cli_runner.go:164] Run: docker network inspect newest-cni-444637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:18:00.552284  330193 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:18:00.556833  330193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:18:00.569469  330193 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:18:00.570643  330193 kubeadm.go:883] updating cluster {Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:18:00.570761  330193 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:18:00.570826  330193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:18:00.604611  330193 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:18:00.604633  330193 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:18:00.604679  330193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:18:00.632395  330193 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:18:00.632438  330193 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:18:00.632446  330193 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:18:00.632555  330193 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-444637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:18:00.632630  330193 ssh_runner.go:195] Run: crio config
	I1018 09:18:00.683711  330193 cni.go:84] Creating CNI manager for ""
	I1018 09:18:00.683732  330193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:18:00.683746  330193 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:18:00.683770  330193 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-444637 NodeName:newest-cni-444637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:18:00.683897  330193 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-444637"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:18:00.683961  330193 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:18:00.693538  330193 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:18:00.693611  330193 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:18:00.701785  330193 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:18:00.715623  330193 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:18:00.729315  330193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
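	(Once kubeadm.yaml.new is in place, the generated config can be sanity-checked inside the node; a sketch, assuming the kubeadm binary at the path shown supports the validate subcommand (present in recent releases):

	    minikube -p newest-cni-444637 ssh -- \
	      sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new
	)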
	I1018 09:18:00.742706  330193 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:18:00.746993  330193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:18:00.758274  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:00.846197  330193 ssh_runner.go:195] Run: sudo systemctl start kubelet
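	(After the daemon-reload and start above, the effective kubelet unit, i.e. the base service file plus the 10-kubeadm.conf drop-in copied in just before, can be inspected with:

	    minikube -p newest-cni-444637 ssh -- systemctl cat kubelet
	)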
	I1018 09:18:00.874953  330193 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637 for IP: 192.168.103.2
	I1018 09:18:00.874980  330193 certs.go:195] generating shared ca certs ...
	I1018 09:18:00.875000  330193 certs.go:227] acquiring lock for ca certs: {Name:mk550b60d986fbbdf7b5e0015c56234b739f3162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:00.875152  330193 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key
	I1018 09:18:00.875197  330193 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key
	I1018 09:18:00.875207  330193 certs.go:257] generating profile certs ...
	I1018 09:18:00.875295  330193 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/client.key
	I1018 09:18:00.875391  330193 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key.d9d366ba
	I1018 09:18:00.875439  330193 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key
	I1018 09:18:00.875557  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem (1338 bytes)
	W1018 09:18:00.875586  330193 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394_empty.pem, impossibly tiny 0 bytes
	I1018 09:18:00.875596  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:18:00.875619  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:18:00.875641  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:18:00.875661  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/certs/key.pem (1675 bytes)
	I1018 09:18:00.875704  330193 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem (1708 bytes)
	I1018 09:18:00.876245  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:18:00.896645  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:18:00.916475  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:18:00.937413  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:18:00.962164  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:18:00.982149  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:18:01.001065  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:18:01.021602  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/newest-cni-444637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:18:01.041260  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:18:01.060553  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/certs/9394.pem --> /usr/share/ca-certificates/9394.pem (1338 bytes)
	I1018 09:18:01.080521  330193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/ssl/certs/93942.pem --> /usr/share/ca-certificates/93942.pem (1708 bytes)
	I1018 09:18:01.099406  330193 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:18:01.112902  330193 ssh_runner.go:195] Run: openssl version
	I1018 09:18:01.119558  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9394.pem && ln -fs /usr/share/ca-certificates/9394.pem /etc/ssl/certs/9394.pem"
	I1018 09:18:01.128761  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.133075  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:35 /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.133130  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9394.pem
	I1018 09:18:01.169581  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9394.pem /etc/ssl/certs/51391683.0"
	I1018 09:18:01.178326  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93942.pem && ln -fs /usr/share/ca-certificates/93942.pem /etc/ssl/certs/93942.pem"
	I1018 09:18:01.187653  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.191858  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:35 /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.191912  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93942.pem
	I1018 09:18:01.227900  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93942.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:18:01.236865  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:18:01.245974  330193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.250554  330193 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.250615  330193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:18:01.285905  330193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
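	(The 51391683.0, 3ec20f2e.0 and b5213941.0 link names above follow OpenSSL's subject-hash convention, so the hash is recomputable from the certificate itself:

	    # /etc/ssl/certs/<subject-hash>.0 is how OpenSSL locates a trusted CA.
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # -> b5213941.0
	)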
	I1018 09:18:01.295059  330193 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:18:01.299170  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:18:01.334401  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:18:01.369411  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:18:01.417245  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:18:01.463956  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:18:01.519260  330193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
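	(Each -checkend 86400 call above exits non-zero when the certificate expires within the next 24 hours, which is what lets the restart path skip regenerating still-valid certs:

	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "valid for >24h" \
	      || echo "expiring within 24h; minikube would regenerate it"
	)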
	I1018 09:18:01.564643  330193 kubeadm.go:400] StartCluster: {Name:newest-cni-444637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-444637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:18:01.564725  330193 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:18:01.564799  330193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:18:01.596025  330193 cri.go:89] found id: "014aa61b2a700319893c6b11615bd85925597f0738e1bf960657bba99a7ac5ea"
	I1018 09:18:01.596053  330193 cri.go:89] found id: "b91ae2df424fdafe037b2eea7a39a37f80929e5ab4c76c1169ce7ba3b9a4bdbd"
	I1018 09:18:01.596059  330193 cri.go:89] found id: "49cf2e65f5a6801e31db940684d60041512ed73bbf34778abdfd8025afc8b25b"
	I1018 09:18:01.596064  330193 cri.go:89] found id: "390882244d27208c7b2d7d0538a0ff970ed197d0a63b391f3e1c81bd7b8255df"
	I1018 09:18:01.596069  330193 cri.go:89] found id: ""
	I1018 09:18:01.596114  330193 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:18:01.609602  330193 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:18:01Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:18:01.609687  330193 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:18:01.619278  330193 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:18:01.619297  330193 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:18:01.619376  330193 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:18:01.628525  330193 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:18:01.629710  330193 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-444637" does not appear in /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:18:01.630508  330193 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-5897/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-444637" cluster setting kubeconfig missing "newest-cni-444637" context setting]
	I1018 09:18:01.631708  330193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.633868  330193 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:18:01.643225  330193 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1018 09:18:01.643268  330193 kubeadm.go:601] duration metric: took 23.964839ms to restartPrimaryControlPlane
	I1018 09:18:01.643282  330193 kubeadm.go:402] duration metric: took 78.647978ms to StartCluster
	I1018 09:18:01.643303  330193 settings.go:142] acquiring lock: {Name:mk177870d6cf7000f95346d8b9c104ade730278a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.643398  330193 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:18:01.645409  330193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-5897/kubeconfig: {Name:mkbdf22e0f6c3f9f36e4cde3352b620d43ace448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:18:01.645688  330193 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:18:01.645769  330193 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:18:01.645862  330193 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-444637"
	I1018 09:18:01.645882  330193 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-444637"
	W1018 09:18:01.645893  330193 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:18:01.645893  330193 addons.go:69] Setting dashboard=true in profile "newest-cni-444637"
	I1018 09:18:01.645921  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.645934  330193 addons.go:238] Setting addon dashboard=true in "newest-cni-444637"
	W1018 09:18:01.645945  330193 addons.go:247] addon dashboard should already be in state true
	I1018 09:18:01.645945  330193 config.go:182] Loaded profile config "newest-cni-444637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:18:01.645948  330193 addons.go:69] Setting default-storageclass=true in profile "newest-cni-444637"
	I1018 09:18:01.645973  330193 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-444637"
	I1018 09:18:01.645980  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.646303  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.646463  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.646481  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.647698  330193 out.go:179] * Verifying Kubernetes components...
	I1018 09:18:01.649210  330193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:18:01.673812  330193 addons.go:238] Setting addon default-storageclass=true in "newest-cni-444637"
	W1018 09:18:01.673837  330193 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:18:01.673877  330193 host.go:66] Checking if "newest-cni-444637" exists ...
	I1018 09:18:01.674375  330193 cli_runner.go:164] Run: docker container inspect newest-cni-444637 --format={{.State.Status}}
	I1018 09:18:01.674516  330193 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:18:01.678901  330193 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:18:01.678924  330193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:18:01.678985  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.679140  330193 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:18:01.680475  330193 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:18:01.681672  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:18:01.681729  330193 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:18:01.681827  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.707736  330193 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:18:01.707766  330193 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:18:01.707826  330193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-444637
	I1018 09:18:01.713270  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.719016  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.734187  330193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/newest-cni-444637/id_rsa Username:docker}
	I1018 09:18:01.812631  330193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:18:01.828229  330193 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:18:01.828317  330193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:18:01.829858  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:18:01.835854  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:18:01.835874  330193 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:18:01.845491  330193 api_server.go:72] duration metric: took 199.769202ms to wait for apiserver process to appear ...
	I1018 09:18:01.845522  330193 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:18:01.845544  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:01.852363  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:18:01.854253  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:18:01.854275  330193 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:18:01.872324  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:18:01.872363  330193 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:18:01.891549  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:18:01.891576  330193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:18:01.910545  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:18:01.910574  330193 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:18:01.928312  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:18:01.928337  330193 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:18:01.942869  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:18:01.942897  330193 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:18:01.957264  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:18:01.957287  330193 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:18:01.971834  330193 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:18:01.971871  330193 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:18:01.988808  330193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:18:03.360064  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:18:03.360099  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:18:03.360117  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:03.416525  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1018 09:18:03.416558  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1018 09:18:03.845768  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:03.850882  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:18:03.850913  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:18:03.925688  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.095784279s)
	I1018 09:18:03.925778  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.073377378s)
	I1018 09:18:03.925913  330193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.937061029s)
	I1018 09:18:03.929127  330193 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-444637 addons enable metrics-server
	
	I1018 09:18:03.937380  330193 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1018 09:18:01.035250  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:03.035670  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	I1018 09:18:03.938934  330193 addons.go:514] duration metric: took 2.293172614s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:18:04.346493  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:04.351148  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:18:04.351178  330193 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:18:04.845878  330193 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:18:04.850252  330193 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 09:18:04.851396  330193 api_server.go:141] control plane version: v1.34.1
	I1018 09:18:04.851430  330193 api_server.go:131] duration metric: took 3.005900151s to wait for apiserver health ...
	I1018 09:18:04.851440  330193 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:18:04.855053  330193 system_pods.go:59] 8 kube-system pods found
	I1018 09:18:04.855092  330193 system_pods.go:61] "coredns-66bc5c9577-gc5dd" [7fab8a8d-bdb4-47d4-bf7d-d03341018666] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:18:04.855100  330193 system_pods.go:61] "etcd-newest-cni-444637" [b54d61ad-b52d-4343-ba3a-a64b03934319] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:18:04.855111  330193 system_pods.go:61] "kindnet-qmlcq" [2c82849a-5511-43a1-a300-a7f46df288ec] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 09:18:04.855117  330193 system_pods.go:61] "kube-apiserver-newest-cni-444637" [a9136c1f-8962-45f7-b005-05bd3f856403] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:18:04.855124  330193 system_pods.go:61] "kube-controller-manager-newest-cni-444637" [b8d840d7-04c3-495c-aafa-cc8a06e58f06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:18:04.855130  330193 system_pods.go:61] "kube-proxy-hbkn5" [d70417da-43f2-4d8c-a088-07cea5225c34] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:18:04.855138  330193 system_pods.go:61] "kube-scheduler-newest-cni-444637" [175527c5-4260-4e39-be83-4c36417f3cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:18:04.855142  330193 system_pods.go:61] "storage-provisioner" [b0974a78-b6ad-45c3-8241-86f8bb7bc65b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:18:04.855151  330193 system_pods.go:74] duration metric: took 3.706424ms to wait for pod list to return data ...
	I1018 09:18:04.855162  330193 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:18:04.857785  330193 default_sa.go:45] found service account: "default"
	I1018 09:18:04.857804  330193 default_sa.go:55] duration metric: took 2.636173ms for default service account to be created ...
	I1018 09:18:04.857817  330193 kubeadm.go:586] duration metric: took 3.212102689s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:18:04.857837  330193 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:18:04.860449  330193 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:18:04.860472  330193 node_conditions.go:123] node cpu capacity is 8
	I1018 09:18:04.860486  330193 node_conditions.go:105] duration metric: took 2.642504ms to run NodePressure ...
	I1018 09:18:04.860498  330193 start.go:241] waiting for startup goroutines ...
	I1018 09:18:04.860504  330193 start.go:246] waiting for cluster config update ...
	I1018 09:18:04.860514  330193 start.go:255] writing updated cluster config ...
	I1018 09:18:04.860806  330193 ssh_runner.go:195] Run: rm -f paused
	I1018 09:18:04.910604  330193 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:18:04.913879  330193 out.go:179] * Done! kubectl is now configured to use "newest-cni-444637" cluster and "default" namespace by default
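The interleaved [+]/[-] blocks above are the apiserver's per-check health report: during the newest-cni-444637 restart the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks briefly fail, then the endpoint clears to 200 on the next poll, about half a second later. As a hedged aside (any working kubeconfig context will do; the context name below is simply the profile from this run), the same verbose report can be fetched by hand:

	kubectl --context newest-cni-444637 get --raw '/healthz?verbose'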
	W1018 09:18:05.535906  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:08.034961  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:10.535644  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:13.036211  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:15.534175  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	W1018 09:18:17.535056  324191 pod_ready.go:104] pod "coredns-66bc5c9577-bpcsk" is not "Ready", error: <nil>
	I1018 09:18:19.034808  324191 pod_ready.go:94] pod "coredns-66bc5c9577-bpcsk" is "Ready"
	I1018 09:18:19.034833  324191 pod_ready.go:86] duration metric: took 38.506130218s for pod "coredns-66bc5c9577-bpcsk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.038302  324191 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.044158  324191 pod_ready.go:94] pod "etcd-default-k8s-diff-port-986220" is "Ready"
	I1018 09:18:19.044183  324191 pod_ready.go:86] duration metric: took 5.852883ms for pod "etcd-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.047009  324191 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.052078  324191 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-986220" is "Ready"
	I1018 09:18:19.052102  324191 pod_ready.go:86] duration metric: took 5.068886ms for pod "kube-apiserver-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.054584  324191 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.233033  324191 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-986220" is "Ready"
	I1018 09:18:19.233061  324191 pod_ready.go:86] duration metric: took 178.456789ms for pod "kube-controller-manager-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.433931  324191 pod_ready.go:83] waiting for pod "kube-proxy-vvtpl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:19.832592  324191 pod_ready.go:94] pod "kube-proxy-vvtpl" is "Ready"
	I1018 09:18:19.832619  324191 pod_ready.go:86] duration metric: took 398.658534ms for pod "kube-proxy-vvtpl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:20.033393  324191 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:20.432734  324191 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-986220" is "Ready"
	I1018 09:18:20.432777  324191 pod_ready.go:86] duration metric: took 399.356966ms for pod "kube-scheduler-default-k8s-diff-port-986220" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:18:20.432793  324191 pod_ready.go:40] duration metric: took 39.908263478s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:18:20.482249  324191 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:18:20.484082  324191 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-986220" cluster and "default" namespace by default
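The pod_ready.go polling above (process 324191) waits for each control-plane pod to report Ready or disappear. A rough kubectl equivalent for the coredns wait, sketched here as an illustration rather than what minikube actually runs, would be:

	kubectl --context default-k8s-diff-port-986220 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s

Note that kubectl wait only succeeds once the pod is Ready, while pod_ready.go also accepts the pod being gone.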
	
	
	==> CRI-O <==
	Oct 18 09:17:50 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:17:50.37660541Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:17:50 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:17:50.594915866Z" level=info msg="Removing container: 648ba523e85c42ddd3f874e5a62114aedc60dc27b5aac43dc0f4df0d049c8b51" id=3d29b266-28d4-4b19-94c4-de12c061ec2a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:17:50 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:17:50.604576847Z" level=info msg="Removed container 648ba523e85c42ddd3f874e5a62114aedc60dc27b5aac43dc0f4df0d049c8b51: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk/dashboard-metrics-scraper" id=3d29b266-28d4-4b19-94c4-de12c061ec2a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.496618519Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3a4524f2-9e1f-48a7-8bba-eb8efc905a21 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.49770216Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3fd14224-5a7d-4c8a-b6b5-6e2e6d159809 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.498769706Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk/dashboard-metrics-scraper" id=8328aacf-fb1b-4c48-a97d-c6ffb0293bdf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.499056317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.506274652Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.507122472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.539442507Z" level=info msg="Created container 327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk/dashboard-metrics-scraper" id=8328aacf-fb1b-4c48-a97d-c6ffb0293bdf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.540188009Z" level=info msg="Starting container: 327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db" id=71d8f86f-8d7d-4135-8ecb-071f2805bda3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.542628456Z" level=info msg="Started container" PID=1761 containerID=327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk/dashboard-metrics-scraper id=71d8f86f-8d7d-4135-8ecb-071f2805bda3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1c8591e7cd26473a6f722f35335c41259825ba74a758f9946a72a1b36b6a7ff3
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.650916263Z" level=info msg="Removing container: ed0214dde9b6c3a395be05c33ba7e949e4400f0bffbec160bb16ac9d6ef8dcb1" id=ce14e8e9-7232-4c15-986b-584751743afb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.651960242Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6abe7531-5087-4bae-8347-e3966bf3e297 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.652887524Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0dfb580b-7ba8-4e56-9ff7-36d7ced157c8 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.654148565Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=22a7fe9d-1253-4991-8296-725d4736d557 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.654401697Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.659845136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.660082125Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1b1c22127fb2b778be7f78d3b8dea141a3085239500ffff1acde8567eb5a0457/merged/etc/passwd: no such file or directory"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.660120642Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1b1c22127fb2b778be7f78d3b8dea141a3085239500ffff1acde8567eb5a0457/merged/etc/group: no such file or directory"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.661178351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.663421932Z" level=info msg="Removed container ed0214dde9b6c3a395be05c33ba7e949e4400f0bffbec160bb16ac9d6ef8dcb1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk/dashboard-metrics-scraper" id=ce14e8e9-7232-4c15-986b-584751743afb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.692020899Z" level=info msg="Created container 49ce691fa7cdd9b79bec964c1afdc4a8c154310a7c6ea44e93cd0d76e7d23447: kube-system/storage-provisioner/storage-provisioner" id=22a7fe9d-1253-4991-8296-725d4736d557 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.692739945Z" level=info msg="Starting container: 49ce691fa7cdd9b79bec964c1afdc4a8c154310a7c6ea44e93cd0d76e7d23447" id=388f2115-1f1f-402d-b267-1f668160e9f7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:18:10 default-k8s-diff-port-986220 crio[557]: time="2025-10-18T09:18:10.694910095Z" level=info msg="Started container" PID=1771 containerID=49ce691fa7cdd9b79bec964c1afdc4a8c154310a7c6ea44e93cd0d76e7d23447 description=kube-system/storage-provisioner/storage-provisioner id=388f2115-1f1f-402d-b267-1f668160e9f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=80308f3b6b6914952b71bbb297a4c4a8e7bb1ed4c5531ad3a601a8456f2c77af
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	49ce691fa7cdd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago      Running             storage-provisioner         1                   80308f3b6b691       storage-provisioner                                    kube-system
	327aee2aa78ae       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   1c8591e7cd264       dashboard-metrics-scraper-6ffb444bf9-m92mk             kubernetes-dashboard
	dd2f1f47e902c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   50 seconds ago      Running             kubernetes-dashboard        0                   06db64c6d60fe       kubernetes-dashboard-855c9754f9-gwp9p                  kubernetes-dashboard
	514c449ade6a7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   e4c12ee041ff0       coredns-66bc5c9577-bpcsk                               kube-system
	0b8ceb3576fbb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   9ba81f26a4853       busybox                                                default
	4d5cc19ffee18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   80308f3b6b691       storage-provisioner                                    kube-system
	3e2529aa2dd60       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   3cc2a1661e9b0       kindnet-cj6bv                                          kube-system
	efcf153f5528d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   8ffb5ff4bab9c       kube-proxy-vvtpl                                       kube-system
	8956123c13137       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   21d4c7fb2ed98       kube-controller-manager-default-k8s-diff-port-986220   kube-system
	8d1ab9fe3eb84       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   4f59887585f9f       kube-apiserver-default-k8s-diff-port-986220            kube-system
	1dc67601595ac       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   f42d43919f969       kube-scheduler-default-k8s-diff-port-986220            kube-system
	bad27ff83c636       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   41917a5894ecb       etcd-default-k8s-diff-port-986220                      kube-system
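The table above is collected from the CRI socket on the node. Assuming the profile name from this run, roughly the same view can be reproduced with crictl over minikube ssh:

	minikube -p default-k8s-diff-port-986220 ssh -- sudo crictl ps -a

The -a flag includes exited containers, which is why the earlier storage-provisioner and dashboard-metrics-scraper attempts still appear.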
	
	
	==> coredns [514c449ade6a78cd215a5ddfcf373f35a48b107fc90ec5014b5ea1fcf64cfc79] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36493 - 59691 "HINFO IN 4214345634566612825.7215447916724245711. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.114176821s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
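The "Still waiting on: kubernetes" lines come from CoreDNS's ready plugin, which withholds readiness until the kubernetes plugin has synced; the i/o timeouts against 10.96.0.1:443 show the pod briefly could not reach the apiserver service during the restart. As a sketch only (the ready plugin listens on :8181 by default, and curl on the host is assumed), the probe can be checked through a port-forward:

	kubectl -n kube-system port-forward deploy/coredns 8181:8181 &
	curl -s http://127.0.0.1:8181/ready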
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-986220
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-986220
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=default-k8s-diff-port-986220
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_16_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:16:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-986220
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:18:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:18:30 +0000   Sat, 18 Oct 2025 09:16:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:18:30 +0000   Sat, 18 Oct 2025 09:16:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:18:30 +0000   Sat, 18 Oct 2025 09:16:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:18:30 +0000   Sat, 18 Oct 2025 09:16:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-986220
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                f86ae77e-f46d-47da-846c-c937a0a7701a
	  Boot ID:                    e8d7ef1f-87bb-488c-8381-e18fe85b484f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-bpcsk                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-986220                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-cj6bv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-986220             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-986220    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-vvtpl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-986220             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-m92mk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gwp9p                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 109s                 kube-proxy       
	  Normal  Starting                 56s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m2s)  kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m2s)  kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x8 over 2m2s)  kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     115s                 kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  115s                 kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s                 kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s                 node-controller  Node default-k8s-diff-port-986220 event: Registered Node default-k8s-diff-port-986220 in Controller
	  Normal  NodeReady                98s                  kubelet          Node default-k8s-diff-port-986220 status is now: NodeReady
	  Normal  Starting                 60s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)    kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)    kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)    kubelet          Node default-k8s-diff-port-986220 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                  node-controller  Node default-k8s-diff-port-986220 event: Registered Node default-k8s-diff-port-986220 in Controller
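The doubled Starting/NodeHas* events reflect the kubelet coming up twice: once at initial cluster creation (115s ago per the events) and again after the stop/start cycle this test performs (60s ago). A hedged way to pull just this node's events, assuming nothing beyond standard kubectl:

	kubectl get events --all-namespaces \
	  --field-selector involvedObject.kind=Node,involvedObject.name=default-k8s-diff-port-986220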
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[  +0.001176] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 01 6a be c1 ed 08 06
	[  +1.096145] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 92 07 d0 c5 bc 08 06
	[  +0.000393] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 8d 0a a3 cc 78 08 06
	[ +17.591772] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a 16 36 e8 43 c0 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 73 e0 1e 4f 5c 08 06
	[ +11.820741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[Oct18 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	[  +0.032974] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 2d 83 26 2e 28 08 06
	[  +4.435535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 e2 07 5a 3b 4a 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 5b 8e 46 ea 47 08 06
	[ +43.809014] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 6f 4b 2b 7f 46 08 06
	[  +0.000367] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 28 86 11 d4 e9 08 06
	
	
	==> etcd [bad27ff83c63687be534ccd3f079002f13a4d8cf081095fd1e212a53f3010fbf] <==
	{"level":"warn","ts":"2025-10-18T09:17:38.310518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.326277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.335964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.345317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.352490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.361029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.369133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.377163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.387201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.396827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.405733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.414387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.423709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.432277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.452067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.460714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.472996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.482595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.492484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.509006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.516015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.532330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.541563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.550863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:17:38.606924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54504","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:36 up  1:01,  0 user,  load average: 3.29, 3.57, 2.53
	Linux default-k8s-diff-port-986220 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3e2529aa2dd60af7f9c954b73b314b5ec999e7f5a4e0b8dd5a9e4f8b4143a321] <==
	I1018 09:17:40.148446       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:17:40.148793       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 09:17:40.149053       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:17:40.149070       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:17:40.149096       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:17:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:17:40.353944       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:17:40.353989       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:17:40.354004       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:17:40.386732       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:17:40.686791       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:17:40.686835       1 metrics.go:72] Registering metrics
	I1018 09:17:40.686971       1 controller.go:711] "Syncing nftables rules"
	I1018 09:17:50.352810       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:17:50.352855       1 main.go:301] handling current node
	I1018 09:18:00.360513       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:18:00.360545       1 main.go:301] handling current node
	I1018 09:18:10.353758       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:18:10.353798       1 main.go:301] handling current node
	I1018 09:18:20.353484       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:18:20.353534       1 main.go:301] handling current node
	I1018 09:18:30.353785       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:18:30.353820       1 main.go:301] handling current node
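kindnet resyncs its view of the node every ten seconds, which is all the "Handling node with IPs" lines record. To follow this stream directly (the pod name is taken from the node description above):

	kubectl -n kube-system logs -f kindnet-cj6bv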
	
	
	==> kube-apiserver [8d1ab9fe3eb84ef483a99bbfe79d01dfa34dfdff518ca313e3c2299c6723b35e] <==
	I1018 09:17:39.185256       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 09:17:39.185322       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:17:39.185403       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 09:17:39.185438       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:17:39.185449       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:17:39.185456       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:17:39.185470       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:17:39.185542       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:17:39.186043       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:17:39.189880       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:17:39.197813       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:17:39.219801       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:17:39.219917       1 policy_source.go:240] refreshing policies
	I1018 09:17:39.236012       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:17:39.590292       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:17:39.642070       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:17:39.654077       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:17:39.695440       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:17:39.705324       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:17:39.764270       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.6.150"}
	I1018 09:17:39.779745       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.79.212"}
	I1018 09:17:40.091330       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:17:42.586953       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:17:42.943107       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:17:42.988673       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8956123c1313708cc585f6ee981938531d1fde0ef837a5cdbf5b02ab1fb0c549] <==
	I1018 09:17:42.533562       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:17:42.533579       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:17:42.533628       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:17:42.534139       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:17:42.534196       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:17:42.534465       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:17:42.534515       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:17:42.534543       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:17:42.534878       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:17:42.536015       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:17:42.536186       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:17:42.538227       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:17:42.538367       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:17:42.538488       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-986220"
	I1018 09:17:42.538545       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 09:17:42.539317       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:17:42.541583       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:17:42.541601       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:17:42.541633       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:17:42.541669       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:17:42.542776       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:17:42.546160       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:17:42.548182       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:17:42.550499       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:17:42.562824       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [efcf153f5528d91cf81fb7b54240b482e4822aa80a11aa28014d0e8723503d50] <==
	I1018 09:17:39.923303       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:17:39.982912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:17:40.083338       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:17:40.083509       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 09:17:40.083751       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:17:40.109977       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:17:40.110061       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:17:40.117018       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:17:40.117444       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:17:40.117822       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:17:40.121267       1 config.go:200] "Starting service config controller"
	I1018 09:17:40.122834       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:17:40.122413       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:17:40.122932       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:17:40.122425       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:17:40.122945       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:17:40.121937       1 config.go:309] "Starting node config controller"
	I1018 09:17:40.122952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:17:40.122986       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:17:40.223272       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:17:40.223295       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:17:40.223326       1 shared_informer.go:356] "Caches are synced" controller="service config"
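The single E-level line above is kube-proxy flagging that nodePortAddresses is unset, so NodePort connections bind on all local IPs; the message itself suggests the "--nodeport-addresses primary" remedy. In a kubeadm-style cluster this setting lives in the kube-proxy ConfigMap, which can be inspected with:

	kubectl -n kube-system get configmap kube-proxy -o yaml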
	
	
	==> kube-scheduler [1dc67601595acad3b95b404bf690768d89426dc4a4256db06ee931235af514af] <==
	I1018 09:17:37.264655       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:17:39.549049       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:17:39.549089       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:17:39.555435       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 09:17:39.555769       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 09:17:39.555634       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:17:39.555866       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:17:39.555591       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:17:39.556403       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:17:39.556909       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:17:39.557001       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:17:39.656132       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 09:17:39.656979       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:17:39.660112       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:17:43 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:43.270781     714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdszw\" (UniqueName: \"kubernetes.io/projected/c10f0845-9777-48ac-b709-3775518d787b-kube-api-access-kdszw\") pod \"kubernetes-dashboard-855c9754f9-gwp9p\" (UID: \"c10f0845-9777-48ac-b709-3775518d787b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gwp9p"
	Oct 18 09:17:47 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:47.596247     714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gwp9p" podStartSLOduration=1.37645028 podStartE2EDuration="4.596219607s" podCreationTimestamp="2025-10-18 09:17:43 +0000 UTC" firstStartedPulling="2025-10-18 09:17:43.436776339 +0000 UTC m=+7.033609121" lastFinishedPulling="2025-10-18 09:17:46.656545659 +0000 UTC m=+10.253378448" observedRunningTime="2025-10-18 09:17:47.595753413 +0000 UTC m=+11.192586205" watchObservedRunningTime="2025-10-18 09:17:47.596219607 +0000 UTC m=+11.193052400"
	Oct 18 09:17:48 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:48.751248     714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:17:49 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:49.588914     714 scope.go:117] "RemoveContainer" containerID="648ba523e85c42ddd3f874e5a62114aedc60dc27b5aac43dc0f4df0d049c8b51"
	Oct 18 09:17:50 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:50.593387     714 scope.go:117] "RemoveContainer" containerID="648ba523e85c42ddd3f874e5a62114aedc60dc27b5aac43dc0f4df0d049c8b51"
	Oct 18 09:17:50 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:50.593787     714 scope.go:117] "RemoveContainer" containerID="ed0214dde9b6c3a395be05c33ba7e949e4400f0bffbec160bb16ac9d6ef8dcb1"
	Oct 18 09:17:50 default-k8s-diff-port-986220 kubelet[714]: E1018 09:17:50.593949     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m92mk_kubernetes-dashboard(73227830-04e5-4436-88a9-fbfc12df2b00)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk" podUID="73227830-04e5-4436-88a9-fbfc12df2b00"
	Oct 18 09:17:51 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:51.597992     714 scope.go:117] "RemoveContainer" containerID="ed0214dde9b6c3a395be05c33ba7e949e4400f0bffbec160bb16ac9d6ef8dcb1"
	Oct 18 09:17:51 default-k8s-diff-port-986220 kubelet[714]: E1018 09:17:51.598150     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m92mk_kubernetes-dashboard(73227830-04e5-4436-88a9-fbfc12df2b00)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk" podUID="73227830-04e5-4436-88a9-fbfc12df2b00"
	Oct 18 09:17:55 default-k8s-diff-port-986220 kubelet[714]: I1018 09:17:55.527819     714 scope.go:117] "RemoveContainer" containerID="ed0214dde9b6c3a395be05c33ba7e949e4400f0bffbec160bb16ac9d6ef8dcb1"
	Oct 18 09:17:55 default-k8s-diff-port-986220 kubelet[714]: E1018 09:17:55.528134     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m92mk_kubernetes-dashboard(73227830-04e5-4436-88a9-fbfc12df2b00)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk" podUID="73227830-04e5-4436-88a9-fbfc12df2b00"
	Oct 18 09:18:10 default-k8s-diff-port-986220 kubelet[714]: I1018 09:18:10.496035     714 scope.go:117] "RemoveContainer" containerID="ed0214dde9b6c3a395be05c33ba7e949e4400f0bffbec160bb16ac9d6ef8dcb1"
	Oct 18 09:18:10 default-k8s-diff-port-986220 kubelet[714]: I1018 09:18:10.649351     714 scope.go:117] "RemoveContainer" containerID="ed0214dde9b6c3a395be05c33ba7e949e4400f0bffbec160bb16ac9d6ef8dcb1"
	Oct 18 09:18:10 default-k8s-diff-port-986220 kubelet[714]: I1018 09:18:10.649594     714 scope.go:117] "RemoveContainer" containerID="327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db"
	Oct 18 09:18:10 default-k8s-diff-port-986220 kubelet[714]: E1018 09:18:10.649817     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m92mk_kubernetes-dashboard(73227830-04e5-4436-88a9-fbfc12df2b00)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk" podUID="73227830-04e5-4436-88a9-fbfc12df2b00"
	Oct 18 09:18:10 default-k8s-diff-port-986220 kubelet[714]: I1018 09:18:10.651516     714 scope.go:117] "RemoveContainer" containerID="4d5cc19ffee186783c97a10c5f2a7ef492399eb6f8acbfa30d889652dbfdcd2b"
	Oct 18 09:18:15 default-k8s-diff-port-986220 kubelet[714]: I1018 09:18:15.527654     714 scope.go:117] "RemoveContainer" containerID="327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db"
	Oct 18 09:18:15 default-k8s-diff-port-986220 kubelet[714]: E1018 09:18:15.527855     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m92mk_kubernetes-dashboard(73227830-04e5-4436-88a9-fbfc12df2b00)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk" podUID="73227830-04e5-4436-88a9-fbfc12df2b00"
	Oct 18 09:18:27 default-k8s-diff-port-986220 kubelet[714]: I1018 09:18:27.495925     714 scope.go:117] "RemoveContainer" containerID="327aee2aa78ae38951a7e22143f154bb2a0f00c5ea96263259918fb6abc5b2db"
	Oct 18 09:18:27 default-k8s-diff-port-986220 kubelet[714]: E1018 09:18:27.496114     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m92mk_kubernetes-dashboard(73227830-04e5-4436-88a9-fbfc12df2b00)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m92mk" podUID="73227830-04e5-4436-88a9-fbfc12df2b00"
	Oct 18 09:18:32 default-k8s-diff-port-986220 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:18:32 default-k8s-diff-port-986220 kubelet[714]: I1018 09:18:32.538981     714 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 09:18:32 default-k8s-diff-port-986220 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:18:32 default-k8s-diff-port-986220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:18:32 default-k8s-diff-port-986220 systemd[1]: kubelet.service: Consumed 1.906s CPU time.
	
	
	==> kubernetes-dashboard [dd2f1f47e902c1cbe5cb90ca529db2c31f57a6d6f5fdebcf2ed75577b59a049b] <==
	2025/10/18 09:17:46 Starting overwatch
	2025/10/18 09:17:46 Using namespace: kubernetes-dashboard
	2025/10/18 09:17:46 Using in-cluster config to connect to apiserver
	2025/10/18 09:17:46 Using secret token for csrf signing
	2025/10/18 09:17:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:17:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:17:46 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:17:46 Generating JWE encryption key
	2025/10/18 09:17:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:17:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:17:47 Initializing JWE encryption key from synchronized object
	2025/10/18 09:17:47 Creating in-cluster Sidecar client
	2025/10/18 09:17:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:17:47 Serving insecurely on HTTP port: 9090
	2025/10/18 09:18:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [49ce691fa7cdd9b79bec964c1afdc4a8c154310a7c6ea44e93cd0d76e7d23447] <==
	I1018 09:18:10.708505       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:18:10.717040       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:18:10.717142       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:18:10.719807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:14.175658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:18.436412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:22.035510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:25.089296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:28.112290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:28.117322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:18:28.117503       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:18:28.117601       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b90e1b13-d855-40dd-8fdf-9ac19eb23314", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-986220_64931e38-d0cf-47be-8edb-003eefbc390c became leader
	I1018 09:18:28.117661       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-986220_64931e38-d0cf-47be-8edb-003eefbc390c!
	W1018 09:18:28.120484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:28.124185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:18:28.218310       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-986220_64931e38-d0cf-47be-8edb-003eefbc390c!
	W1018 09:18:30.127257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:30.134244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:32.138522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:32.142936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:34.146090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:34.150488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:36.154461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:18:36.159840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [4d5cc19ffee186783c97a10c5f2a7ef492399eb6f8acbfa30d889652dbfdcd2b] <==
	I1018 09:17:39.890190       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:18:09.892944       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
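
The storage-provisioner blocks above show a standard client-go leader election that still locks on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), which is why the apiserver prints the "v1 Endpoints is deprecated in v1.33+" warning on every acquire and renew. Below is a minimal sketch of the same election done against the coordination.k8s.io/v1 Lease lock the warning recommends; it is illustrative code built on the public client-go API, not the provisioner's actual implementation.

	// Minimal sketch (not minikube's code): Lease-based leader election,
	// which avoids the deprecated v1 Endpoints lock seen in the log above.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		hostname, _ := os.Hostname()
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath", // lease name taken from the log above
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // how long an acquired lease stays valid
			RenewDeadline: 10 * time.Second, // leader must renew before this elapses
			RetryPeriod:   2 * time.Second,  // how often candidates retry acquisition
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("became leader, starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost leadership, shutting down")
				},
			},
		})
	}
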
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-986220 -n default-k8s-diff-port-986220
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-986220 -n default-k8s-diff-port-986220: exit status 2 (324.7975ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-986220 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.51s)
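
Note the "status error: exit status 2 (may be ok)" handling above: the harness treats a non-zero exit from `minikube status` as data rather than as an immediate failure, presumably because a paused cluster legitimately reports components as stopped. A minimal sketch, assuming nothing about helpers_test.go beyond what the transcript shows, of how a Go harness separates "ran and exited non-zero" from "could not run at all":

	// Sketch only: extract an exit code from a CLI run instead of failing
	// on any non-nil error. The minikube arguments mirror the transcript.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) (string, int, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) {
				// Process ran but exited non-zero; surface the code, not an error.
				return string(out), ee.ExitCode(), nil
			}
			// Binary could not be started at all (missing file, perms, ...).
			return string(out), -1, err
		}
		return string(out), 0, nil
	}

	func main() {
		out, code, err := run("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "default-k8s-diff-port-986220")
		if err != nil {
			panic(err)
		}
		if code == 2 {
			fmt.Println("status error: exit status 2 (may be ok):", out)
		}
	}
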

                                                
                                    

Test pass (264/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.28
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 4.69
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.39
21 TestBinaryMirror 0.81
22 TestOffline 87.96
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 144.97
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 7.45
48 TestAddons/StoppedEnableDisable 16.72
49 TestCertOptions 26.58
50 TestCertExpiration 214.42
52 TestForceSystemdFlag 28.93
53 TestForceSystemdEnv 25.49
55 TestKVMDriverInstallOrUpdate 1.08
59 TestErrorSpam/setup 20.92
60 TestErrorSpam/start 0.64
61 TestErrorSpam/status 0.93
62 TestErrorSpam/pause 6.44
63 TestErrorSpam/unpause 5.61
64 TestErrorSpam/stop 2.6
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 37.13
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.24
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.77
76 TestFunctional/serial/CacheCmd/cache/add_local 1.18
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 67.3
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1.23
87 TestFunctional/serial/LogsFileCmd 1.23
88 TestFunctional/serial/InvalidService 3.89
90 TestFunctional/parallel/ConfigCmd 0.35
91 TestFunctional/parallel/DashboardCmd 6.93
92 TestFunctional/parallel/DryRun 0.4
93 TestFunctional/parallel/InternationalLanguage 0.17
94 TestFunctional/parallel/StatusCmd 1.01
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 24.09
102 TestFunctional/parallel/SSHCmd 0.6
103 TestFunctional/parallel/CpCmd 1.72
104 TestFunctional/parallel/MySQL 16.05
105 TestFunctional/parallel/FileSync 0.26
106 TestFunctional/parallel/CertSync 1.73
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
114 TestFunctional/parallel/License 0.43
116 TestFunctional/parallel/Version/short 0.05
117 TestFunctional/parallel/Version/components 0.54
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
122 TestFunctional/parallel/ImageCommands/ImageBuild 2.14
123 TestFunctional/parallel/ImageCommands/Setup 1.01
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
130 TestFunctional/parallel/MountCmd/any-port 5.76
131 TestFunctional/parallel/ProfileCmd/profile_list 0.39
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
138 TestFunctional/parallel/MountCmd/specific-port 1.67
139 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.22
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/ServiceCmd/List 1.7
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 147.53
164 TestMultiControlPlane/serial/DeployApp 5.23
165 TestMultiControlPlane/serial/PingHostFromPods 0.97
166 TestMultiControlPlane/serial/AddWorkerNode 25.08
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
169 TestMultiControlPlane/serial/CopyFile 16.73
170 TestMultiControlPlane/serial/StopSecondaryNode 14.25
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
172 TestMultiControlPlane/serial/RestartSecondaryNode 8.8
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.83
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.61
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
177 TestMultiControlPlane/serial/StopCluster 41.64
178 TestMultiControlPlane/serial/RestartCluster 56.92
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
180 TestMultiControlPlane/serial/AddSecondaryNode 41.43
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
185 TestJSONOutput/start/Command 37.88
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.96
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 32.23
211 TestKicCustomNetwork/use_default_bridge_network 23.48
212 TestKicExistingNetwork 24.8
213 TestKicCustomSubnet 24.22
214 TestKicStaticIP 24.27
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 48.15
219 TestMountStart/serial/StartWithMountFirst 5.84
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 8.26
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 7.08
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 93.12
231 TestMultiNode/serial/DeployApp2Nodes 3.45
232 TestMultiNode/serial/PingHostFrom2Pods 0.67
233 TestMultiNode/serial/AddNode 27.29
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.64
236 TestMultiNode/serial/CopyFile 9.51
237 TestMultiNode/serial/StopNode 2.24
238 TestMultiNode/serial/StartAfterStop 7.14
239 TestMultiNode/serial/RestartKeepsNodes 55.68
240 TestMultiNode/serial/DeleteNode 5.01
241 TestMultiNode/serial/StopMultiNode 17.51
242 TestMultiNode/serial/RestartMultiNode 26.85
243 TestMultiNode/serial/ValidateNameConflict 24.92
248 TestPreload 100.21
250 TestScheduledStopUnix 97.43
253 TestInsufficientStorage 10.2
254 TestRunningBinaryUpgrade 54.5
256 TestKubernetesUpgrade 316.48
257 TestMissingContainerUpgrade 100.22
265 TestNetworkPlugins/group/false 11.11
276 TestStoppedBinaryUpgrade/Setup 0.42
277 TestStoppedBinaryUpgrade/Upgrade 47.08
278 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
280 TestPause/serial/Start 71.99
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
283 TestNoKubernetes/serial/StartWithK8s 22.32
284 TestNoKubernetes/serial/StartWithStopK8s 17.2
285 TestNoKubernetes/serial/Start 4.69
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
287 TestNoKubernetes/serial/ProfileList 1.86
288 TestNoKubernetes/serial/Stop 1.26
289 TestPause/serial/SecondStartNoReconfiguration 6.01
290 TestNoKubernetes/serial/StartNoArgs 6.75
292 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
293 TestNetworkPlugins/group/auto/Start 43.42
294 TestNetworkPlugins/group/kindnet/Start 40.44
295 TestNetworkPlugins/group/auto/KubeletFlags 0.28
296 TestNetworkPlugins/group/auto/NetCatPod 9.25
297 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
298 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
299 TestNetworkPlugins/group/kindnet/NetCatPod 9.19
300 TestNetworkPlugins/group/auto/DNS 0.13
301 TestNetworkPlugins/group/auto/Localhost 0.09
302 TestNetworkPlugins/group/auto/HairPin 0.09
303 TestNetworkPlugins/group/kindnet/DNS 0.12
304 TestNetworkPlugins/group/kindnet/Localhost 0.1
305 TestNetworkPlugins/group/kindnet/HairPin 0.1
306 TestNetworkPlugins/group/calico/Start 47.61
307 TestNetworkPlugins/group/custom-flannel/Start 51.57
308 TestNetworkPlugins/group/bridge/Start 41.8
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/flannel/Start 55.24
311 TestNetworkPlugins/group/calico/KubeletFlags 0.4
312 TestNetworkPlugins/group/calico/NetCatPod 9.07
313 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.21
315 TestNetworkPlugins/group/calico/DNS 0.18
316 TestNetworkPlugins/group/calico/Localhost 0.15
317 TestNetworkPlugins/group/calico/HairPin 0.13
318 TestNetworkPlugins/group/custom-flannel/DNS 0.16
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
321 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
322 TestNetworkPlugins/group/bridge/NetCatPod 10.24
323 TestNetworkPlugins/group/enable-default-cni/Start 69.74
324 TestNetworkPlugins/group/bridge/DNS 0.14
325 TestNetworkPlugins/group/bridge/Localhost 0.1
326 TestNetworkPlugins/group/bridge/HairPin 0.1
328 TestStartStop/group/old-k8s-version/serial/FirstStart 51.59
329 TestNetworkPlugins/group/flannel/ControllerPod 6
331 TestStartStop/group/no-preload/serial/FirstStart 54.59
332 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
333 TestNetworkPlugins/group/flannel/NetCatPod 8.48
334 TestNetworkPlugins/group/flannel/DNS 0.13
335 TestNetworkPlugins/group/flannel/Localhost 0.1
336 TestNetworkPlugins/group/flannel/HairPin 0.11
338 TestStartStop/group/embed-certs/serial/FirstStart 71.58
339 TestStartStop/group/old-k8s-version/serial/DeployApp 10.26
341 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
342 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.22
343 TestStartStop/group/old-k8s-version/serial/Stop 16.03
344 TestStartStop/group/no-preload/serial/DeployApp 7.27
345 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
346 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
347 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
349 TestStartStop/group/no-preload/serial/Stop 18.13
350 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
351 TestStartStop/group/old-k8s-version/serial/SecondStart 44.69
353 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 46.6
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
355 TestStartStop/group/no-preload/serial/SecondStart 52.46
356 TestStartStop/group/embed-certs/serial/DeployApp 8.23
357 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
359 TestStartStop/group/embed-certs/serial/Stop 16.91
360 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
361 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
363 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
365 TestStartStop/group/newest-cni/serial/FirstStart 29.52
367 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
368 TestStartStop/group/embed-certs/serial/SecondStart 49.07
369 TestStartStop/group/default-k8s-diff-port/serial/Stop 17.05
370 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
371 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
372 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
375 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.69
376 TestStartStop/group/newest-cni/serial/DeployApp 0
378 TestStartStop/group/newest-cni/serial/Stop 13.25
379 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
380 TestStartStop/group/newest-cni/serial/SecondStart 10.88
381 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
382 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
386 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
387 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
389 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
390 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
391 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
TestDownloadOnly/v1.28.0/json-events (4.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-746820 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-746820 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.280086682s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.28s)
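
The json-events subtest drives `minikube start -o=json`, which turns stdout into a line-delimited stream of JSON events (CloudEvents-style objects carrying a type and a data payload). A minimal consumer sketch under that assumed shape, reusing the flags from the invocation above:

	// Sketch of consuming the -o=json event stream; the "type"/"data" field
	// names are an assumption about the event shape, not a documented contract.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
			"--download-only", "-p", "download-only-746820",
			"--kubernetes-version=v1.28.0", "--container-runtime=crio", "--driver=docker")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			log.Fatal(err)
		}
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		sc := bufio.NewScanner(stdout)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // progress events can be long lines
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON noise on stdout
			}
			fmt.Printf("event type=%v data=%v\n", ev["type"], ev["data"])
		}
		if err := cmd.Wait(); err != nil {
			log.Fatal(err)
		}
	}
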

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 08:29:16.990255    9394 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1018 08:29:16.990361    9394 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
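
preload-exists asserts nothing beyond the tarball logged above being present on disk. A sketch of an equivalent check follows; the directory layout and file-name scheme are read off the "Found local preload" line rather than taken from minikube's source, so treat them as assumptions:

	// Sketch: verify a preload tarball exists under MINIKUBE_HOME. The
	// naming scheme (v18, cri-o-overlay, amd64) is copied from the log above.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func preloadPath(minikubeHome, k8sVersion string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0")
		if _, err := os.Stat(p); err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		fmt.Println("Found local preload:", p)
	}
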

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-746820
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-746820: exit status 85 (64.182115ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-746820 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-746820 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:29:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:29:12.752294    9405 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:29:12.752561    9405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:12.752572    9405 out.go:374] Setting ErrFile to fd 2...
	I1018 08:29:12.752576    9405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:12.752754    9405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	W1018 08:29:12.752898    9405 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21767-5897/.minikube/config/config.json: open /home/jenkins/minikube-integration/21767-5897/.minikube/config/config.json: no such file or directory
	I1018 08:29:12.753378    9405 out.go:368] Setting JSON to true
	I1018 08:29:12.754251    9405 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":701,"bootTime":1760775452,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:29:12.754337    9405 start.go:141] virtualization: kvm guest
	I1018 08:29:12.756609    9405 out.go:99] [download-only-746820] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:29:12.756740    9405 notify.go:220] Checking for updates...
	W1018 08:29:12.756736    9405 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 08:29:12.758020    9405 out.go:171] MINIKUBE_LOCATION=21767
	I1018 08:29:12.759433    9405 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:29:12.760712    9405 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 08:29:12.761973    9405 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 08:29:12.763194    9405 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 08:29:12.765375    9405 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 08:29:12.765610    9405 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:29:12.794080    9405 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 08:29:12.794224    9405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:13.208181    9405 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-18 08:29:13.197643522 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:29:13.208284    9405 docker.go:318] overlay module found
	I1018 08:29:13.210118    9405 out.go:99] Using the docker driver based on user configuration
	I1018 08:29:13.210148    9405 start.go:305] selected driver: docker
	I1018 08:29:13.210154    9405 start.go:925] validating driver "docker" against <nil>
	I1018 08:29:13.210223    9405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:13.272927    9405 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-18 08:29:13.263421437 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:29:13.273074    9405 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:29:13.273594    9405 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1018 08:29:13.273758    9405 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 08:29:13.275550    9405 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-746820 host does not exist
	  To start a cluster, run: "minikube start -p download-only-746820"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-746820
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-330759 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-330759 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.693316996s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.69s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 08:29:22.104148    9394 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1018 08:29:22.104195    9394 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-5897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-330759
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-330759: exit status 85 (63.078975ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-746820 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-746820 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-746820                                                                                                                                                   │ download-only-746820 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-330759 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-330759 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:29:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:29:17.451157    9762 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:29:17.451396    9762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:17.451405    9762 out.go:374] Setting ErrFile to fd 2...
	I1018 08:29:17.451408    9762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:17.451600    9762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:29:17.452048    9762 out.go:368] Setting JSON to true
	I1018 08:29:17.452877    9762 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":705,"bootTime":1760775452,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:29:17.452962    9762 start.go:141] virtualization: kvm guest
	I1018 08:29:17.455121    9762 out.go:99] [download-only-330759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:29:17.455274    9762 notify.go:220] Checking for updates...
	I1018 08:29:17.456814    9762 out.go:171] MINIKUBE_LOCATION=21767
	I1018 08:29:17.458475    9762 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:29:17.459619    9762 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 08:29:17.460863    9762 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 08:29:17.462114    9762 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 08:29:17.464676    9762 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 08:29:17.464881    9762 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:29:17.488083    9762 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 08:29:17.488211    9762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:17.550721    9762 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-18 08:29:17.540083079 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:29:17.550822    9762 docker.go:318] overlay module found
	I1018 08:29:17.552554    9762 out.go:99] Using the docker driver based on user configuration
	I1018 08:29:17.552593    9762 start.go:305] selected driver: docker
	I1018 08:29:17.552602    9762 start.go:925] validating driver "docker" against <nil>
	I1018 08:29:17.552698    9762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:17.608938    9762 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-18 08:29:17.598262412 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:29:17.609128    9762 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:29:17.609893    9762 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1018 08:29:17.610086    9762 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 08:29:17.611807    9762 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-330759 host does not exist
	  To start a cluster, run: "minikube start -p download-only-330759"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-330759
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (0.39s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-215465 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-215465" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-215465
--- PASS: TestDownloadOnlyKic (0.39s)

                                                
                                    
TestBinaryMirror (0.81s)

                                                
                                                
=== RUN   TestBinaryMirror
I1018 08:29:23.189241    9394 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-658787 --alsologtostderr --binary-mirror http://127.0.0.1:33031 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-658787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-658787
--- PASS: TestBinaryMirror (0.81s)
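
The "?checksum=file:<url>" suffix logged above is the go-getter convention for pinning a download to a digest published next to it. A hand-rolled sketch of the same verification in plain Go, assuming the .sha256 file's first whitespace-separated field is the hex digest (the format dl.k8s.io publishes):

	// Sketch: download a binary and its published .sha256, then compare digests.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"net/http"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		const base = "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			log.Fatal(err)
		}
		sumFile, err := fetch(base + ".sha256")
		if err != nil {
			log.Fatal(err)
		}
		want := strings.Fields(string(sumFile))[0]
		got := sha256.Sum256(bin)
		if hex.EncodeToString(got[:]) != want {
			log.Fatalf("checksum mismatch: got %x want %s", got, want)
		}
		fmt.Println("kubectl digest verified")
	}
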

                                                
                                    
TestOffline (87.96s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-179679 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-179679 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m25.501390315s)
helpers_test.go:175: Cleaning up "offline-crio-179679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-179679
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-179679: (2.453755887s)
--- PASS: TestOffline (87.96s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-757656
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-757656: exit status 85 (55.175218ms)
-- stdout --
	* Profile "addons-757656" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-757656"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-757656
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-757656: exit status 85 (53.281403ms)
-- stdout --
	* Profile "addons-757656" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-757656"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (144.97s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-757656 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-757656 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m24.968611483s)
--- PASS: TestAddons/Setup (144.97s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-757656 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-757656 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (7.45s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-757656 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-757656 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a3b7e68b-5d74-4895-bd87-cf2c9aaf93c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a3b7e68b-5d74-4895-bd87-cf2c9aaf93c1] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.004119605s
addons_test.go:694: (dbg) Run:  kubectl --context addons-757656 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-757656 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-757656 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.45s)
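The printenv probes above double as a manual check that the gcp-auth addon injected fake credentials into a fresh pod; context and pod name are taken from the run:

	kubectl --context addons-757656 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
	kubectl --context addons-757656 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"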

TestAddons/StoppedEnableDisable (16.72s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-757656
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-757656: (16.462783048s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-757656
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-757656
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-757656
--- PASS: TestAddons/StoppedEnableDisable (16.72s)

TestCertOptions (26.58s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-043492 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-043492 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.382389987s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-043492 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-043492 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-043492 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-043492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-043492
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-043492: (2.506399826s)
--- PASS: TestCertOptions (26.58s)
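The openssl call above is the key assertion: the extra SANs and the custom port must land in the apiserver's serving certificate. A minimal manual check (profile name and the grep filter are illustrative, not from the run):

	minikube -p cert-options-demo ssh \
	    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	    | grep -A1 "Subject Alternative Name"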

TestCertExpiration (214.42s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-558693 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-558693 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (23.496729665s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-558693 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-558693 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (8.344366849s)
helpers_test.go:175: Cleaning up "cert-expiration-558693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-558693
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-558693: (2.577192777s)
--- PASS: TestCertExpiration (214.42s)
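The two starts above encode the scenario: issue certificates valid for only three minutes, let them lapse, then start again and expect minikube to regenerate them (the regeneration expectation is inferred from the test name, not printed in this log). A hand-run sketch with an illustrative profile name:

	minikube start -p cert-exp-demo --cert-expiration=3m --driver=docker --container-runtime=crio
	sleep 200    # let the 3m certificates expire
	minikube start -p cert-exp-demo --cert-expiration=8760h --driver=docker --container-runtime=crio
	minikube delete -p cert-exp-demo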

TestForceSystemdFlag (28.93s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-619251 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-619251 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.117219971s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-619251 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-619251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-619251
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-619251: (2.510219003s)
--- PASS: TestForceSystemdFlag (28.93s)
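The ssh step above reads the CRI-O drop-in that minikube generates; with --force-systemd the expected content (an assumption, since this run does not print it) is a systemd cgroup manager. A minimal manual check, profile name illustrative:

	minikube start -p force-systemd-demo --force-systemd --driver=docker --container-runtime=crio
	minikube -p force-systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
	# expect something like: cgroup_manager = "systemd"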

TestForceSystemdEnv (25.49s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-980759 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-980759 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.999450948s)
helpers_test.go:175: Cleaning up "force-systemd-env-980759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-980759
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-980759: (2.48944907s)
--- PASS: TestForceSystemdEnv (25.49s)

TestKVMDriverInstallOrUpdate (1.08s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1018 09:10:09.201861    9394 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1018 09:10:09.202029    9394 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate4228820562/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 09:10:09.232496    9394 install.go:163] /tmp/TestKVMDriverInstallOrUpdate4228820562/001/docker-machine-driver-kvm2 version is 1.1.1
W1018 09:10:09.232539    9394 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1018 09:10:09.232674    9394 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1018 09:10:09.232732    9394 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4228820562/001/docker-machine-driver-kvm2
I1018 09:10:10.134009    9394 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate4228820562/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 09:10:10.153000    9394 install.go:163] /tmp/TestKVMDriverInstallOrUpdate4228820562/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.08s)
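The install.go lines above show the update logic: execute the driver binary, read the version it reports, and fetch the wanted release from GitHub when they differ. A rough hand-run equivalent (paths illustrative; assumes the driver binary supports a version subcommand, which is what the validation step appears to invoke):

	./docker-machine-driver-kvm2 version    # reports e.g. 1.1.1
	curl -LO https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64
	install docker-machine-driver-kvm2-amd64 ./docker-machine-driver-kvm2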

TestErrorSpam/setup (20.92s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-857504 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-857504 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-857504 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-857504 --driver=docker  --container-runtime=crio: (20.916274161s)
--- PASS: TestErrorSpam/setup (20.92s)

TestErrorSpam/start (0.64s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

TestErrorSpam/status (0.93s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 status
--- PASS: TestErrorSpam/status (0.93s)

TestErrorSpam/pause (6.44s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 pause: exit status 80 (2.023258728s)
-- stdout --
	* Pausing node nospam-857504 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:35:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 pause: exit status 80 (2.292909692s)
-- stdout --
	* Pausing node nospam-857504 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:35:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 pause: exit status 80 (2.124774525s)
-- stdout --
	* Pausing node nospam-857504 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:35:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.44s)
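All three pause attempts die on the same probe, which can be reproduced directly inside the node (the command is verbatim from the stderr above; on this image /run/runc is absent, so runc has no state directory to enumerate):

	minikube -p nospam-857504 ssh "sudo runc list -f json"
	# => level=error msg="open /run/runc: no such file or directory"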

TestErrorSpam/unpause (5.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 unpause: exit status 80 (1.793930698s)
-- stdout --
	* Unpausing node nospam-857504 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:35:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 unpause: exit status 80 (1.999721331s)
-- stdout --
	* Unpausing node nospam-857504 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:35:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 unpause: exit status 80 (1.814428447s)
-- stdout --
	* Unpausing node nospam-857504 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:35:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.61s)

TestErrorSpam/stop (2.6s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 stop: (2.411531865s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857504 --log_dir /tmp/nospam-857504 stop
--- PASS: TestErrorSpam/stop (2.60s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21767-5897/.minikube/files/etc/test/nested/copy/9394/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.13s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897534 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-897534 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.131335532s)
--- PASS: TestFunctional/serial/StartWithProxy (37.13s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.24s)

=== RUN   TestFunctional/serial/SoftStart
I1018 08:36:18.867475    9394 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897534 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-897534 --alsologtostderr -v=8: (6.234212368s)
functional_test.go:678: soft start took 6.235083906s for "functional-897534" cluster.
I1018 08:36:25.102114    9394 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.24s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-897534 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.77s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.77s)

TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-897534 /tmp/TestFunctionalserialCacheCmdcacheadd_local816961066/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 cache add minikube-local-cache-test:functional-897534
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 cache delete minikube-local-cache-test:functional-897534
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-897534
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897534 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (275.004087ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
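The round-trip above is a ready-made recipe for verifying the image cache by hand: remove the image in the node, confirm inspecti fails, reload, confirm it succeeds. Commands as in the run; only the comments are added:

	minikube -p functional-897534 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-897534 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	minikube -p functional-897534 cache reload
	minikube -p functional-897534 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again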

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 kubectl -- --context functional-897534 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-897534 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (67.3s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897534 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1018 08:36:49.586994    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:36:49.600972    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:36:49.612481    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:36:49.633875    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:36:49.675268    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:36:49.756740    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:36:49.918251    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:36:50.239920    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:36:50.881621    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:36:52.163233    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:36:54.726149    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:36:59.847834    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:10.089598    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:30.571570    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-897534 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m7.304311417s)
functional_test.go:776: restart took 1m7.304429754s for "functional-897534" cluster.
I1018 08:37:38.699224    9394 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (67.30s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-897534 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.23s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-897534 logs: (1.225060051s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

TestFunctional/serial/LogsFileCmd (1.23s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 logs --file /tmp/TestFunctionalserialLogsFileCmd1275459128/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-897534 logs --file /tmp/TestFunctionalserialLogsFileCmd1275459128/001/logs.txt: (1.232339856s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.23s)

TestFunctional/serial/InvalidService (3.89s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-897534 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-897534
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-897534: exit status 115 (338.955377ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30511 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-897534 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.89s)
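The sequence above doubles as a recipe for exercising the SVC_UNREACHABLE path: a Service whose backing pod can never run should make minikube service print the URL table and still exit 115. Commands from the run:

	kubectl --context functional-897534 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-897534    # expect exit status 115, SVC_UNREACHABLE
	kubectl --context functional-897534 delete -f testdata/invalidsvc.yaml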

TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897534 config get cpus: exit status 14 (73.266537ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897534 config get cpus: exit status 14 (51.300318ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
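The contract exercised above: config get on an unset key exits 14 with "Error: specified key could not be found in config", and set/unset round-trip cleanly. Commands from the run:

	minikube -p functional-897534 config get cpus      # exit 14: key not set
	minikube -p functional-897534 config set cpus 2
	minikube -p functional-897534 config get cpus      # prints 2
	minikube -p functional-897534 config unset cpus
	minikube -p functional-897534 config get cpus      # exit 14 again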

TestFunctional/parallel/DashboardCmd (6.93s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-897534 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-897534 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 45638: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.93s)

TestFunctional/parallel/DryRun (0.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897534 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-897534 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (167.977979ms)
-- stdout --
	* [functional-897534] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1018 08:37:49.177766   44692 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:37:49.178112   44692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:37:49.178125   44692 out.go:374] Setting ErrFile to fd 2...
	I1018 08:37:49.178132   44692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:37:49.178586   44692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:37:49.179477   44692 out.go:368] Setting JSON to false
	I1018 08:37:49.180482   44692 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1217,"bootTime":1760775452,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:37:49.180572   44692 start.go:141] virtualization: kvm guest
	I1018 08:37:49.182551   44692 out.go:179] * [functional-897534] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:37:49.183751   44692 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 08:37:49.183774   44692 notify.go:220] Checking for updates...
	I1018 08:37:49.186582   44692 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:37:49.187990   44692 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 08:37:49.192634   44692 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 08:37:49.193859   44692 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 08:37:49.194969   44692 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:37:49.196611   44692 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:37:49.197071   44692 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:37:49.224410   44692 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 08:37:49.224536   44692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:37:49.285147   44692 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-10-18 08:37:49.274081171 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:37:49.285248   44692 docker.go:318] overlay module found
	I1018 08:37:49.286963   44692 out.go:179] * Using the docker driver based on existing profile
	I1018 08:37:49.288396   44692 start.go:305] selected driver: docker
	I1018 08:37:49.288413   44692 start.go:925] validating driver "docker" against &{Name:functional-897534 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-897534 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:37:49.288527   44692 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:37:49.290387   44692 out.go:203] 
	W1018 08:37:49.291639   44692 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 08:37:49.292867   44692 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897534 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.40s)
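The dry-run rejection above comes from a simple preflight bound: 250MiB requested versus a 1800MB usable minimum. A minimal sketch of such a guard, assuming hypothetical names (validateMemory, minUsableMemMB); this is not minikube's actual validation code, only the shape of the check the log implies:

package main

import (
	"fmt"
	"os"
)

// minUsableMemMB mirrors the 1800MB floor reported in the log above;
// the 250MiB request is the --memory value the test passes.
const minUsableMemMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // matches the dry-run's observed exit status 23
	}
}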

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897534 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-897534 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (167.995238ms)

-- stdout --
	* [functional-897534] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1018 08:37:49.582893   45003 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:37:49.583012   45003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:37:49.583025   45003 out.go:374] Setting ErrFile to fd 2...
	I1018 08:37:49.583032   45003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:37:49.583367   45003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:37:49.583835   45003 out.go:368] Setting JSON to false
	I1018 08:37:49.584818   45003 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1218,"bootTime":1760775452,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:37:49.584915   45003 start.go:141] virtualization: kvm guest
	I1018 08:37:49.586603   45003 out.go:179] * [functional-897534] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1018 08:37:49.588275   45003 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 08:37:49.588274   45003 notify.go:220] Checking for updates...
	I1018 08:37:49.590562   45003 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:37:49.591722   45003 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 08:37:49.592855   45003 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 08:37:49.593886   45003 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 08:37:49.594998   45003 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:37:49.597293   45003 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:37:49.597834   45003 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:37:49.623432   45003 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 08:37:49.623558   45003 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:37:49.689627   45003 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-18 08:37:49.678250805 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:37:49.689717   45003 docker.go:318] overlay module found
	I1018 08:37:49.691336   45003 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1018 08:37:49.692462   45003 start.go:305] selected driver: docker
	I1018 08:37:49.692478   45003 start.go:925] validating driver "docker" against &{Name:functional-897534 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-897534 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:37:49.692598   45003 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:37:49.694691   45003 out.go:203] 
	W1018 08:37:49.695837   45003 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1018 08:37:49.696930   45003 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
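The French lines above are the same dry-run failure localized ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" is "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation 250MiB is less than the usable minimum of 1800MB"). A rough sketch of locale-keyed message lookup, assuming an env-driven LANG/LC_ALL switch; the translations table is hypothetical and minikube's real catalog machinery differs:

package main

import (
	"fmt"
	"os"
	"strings"
)

// translations is a hypothetical one-entry catalog; minikube ships real
// per-locale message files, so this only illustrates the lookup idea.
var translations = map[string]map[string]string{
	"fr": {
		"Using the docker driver based on existing profile": "Utilisation du pilote docker basé sur le profil existant",
	},
}

func localize(msg string) string {
	lang := os.Getenv("LC_ALL")
	if lang == "" {
		lang = os.Getenv("LANG") // e.g. "fr_FR.UTF-8"
	}
	if i := strings.IndexAny(lang, "_."); i > 0 {
		lang = lang[:i] // keep just the language code
	}
	if msgs, ok := translations[lang]; ok {
		if t, ok := msgs[msg]; ok {
			return t
		}
	}
	return msg // fall back to English
}

func main() {
	fmt.Println("* " + localize("Using the docker driver based on existing profile"))
}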

TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)
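The -f flag in the second command above takes a Go text/template rendered against the status structure (the "kublet" key is copied verbatim from the test's format string). A self-contained sketch with a stand-in Status type; the field values are hypothetical:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the struct minikube renders with `status -f`;
// the format string below is the one the test passes.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	_ = tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}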

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (24.09s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [e974cd76-a73e-4392-a75d-93099405bd56] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003728874s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-897534 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-897534 apply -f testdata/storage-provisioner/pvc.yaml
2025/10/18 08:37:56 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-897534 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-897534 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [190a453b-c7cd-4027-8d54-d28ff01f6195] Pending
helpers_test.go:352: "sp-pod" [190a453b-c7cd-4027-8d54-d28ff01f6195] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [190a453b-c7cd-4027-8d54-d28ff01f6195] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004026268s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-897534 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-897534 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-897534 apply -f testdata/storage-provisioner/pod.yaml
I1018 08:38:07.317368    9394 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b2b0c9a2-f94c-49ea-b73a-1d350bb4a87c] Pending
helpers_test.go:352: "sp-pod" [b2b0c9a2-f94c-49ea-b73a-1d350bb4a87c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b2b0c9a2-f94c-49ea-b73a-1d350bb4a87c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003347221s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-897534 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.09s)
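The tail of the log above is the persistence check itself: write a file through the PVC mount, delete and recreate the pod, then confirm the file survived. A compact sketch of the same flow driven via kubectl (illustrative only; the test's own Go helpers differ):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against the same context as the test; this is
// an illustrative rewrite of the check, not the test's helper code.
func run(args ...string) {
	full := append([]string{"--context", "functional-897534"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s(err=%v)\n", args, out, err)
}

func main() {
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the PVC
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml") // new pod, same claim
	run("exec", "sp-pod", "--", "ls", "/tmp/mount")             // expect "foo" to survive
}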

TestFunctional/parallel/SSHCmd (0.6s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "echo hello"
I1018 08:37:56.719358    9394 detect.go:223] nested VM detected
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.60s)

TestFunctional/parallel/CpCmd (1.72s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh -n functional-897534 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 cp functional-897534:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2584049926/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh -n functional-897534 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh -n functional-897534 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.72s)

TestFunctional/parallel/MySQL (16.05s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-897534 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-g46jc" [00429dba-2e27-4bbe-907b-08226f7d820c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1018 08:38:11.533755    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "mysql-5bb876957f-g46jc" [00429dba-2e27-4bbe-907b-08226f7d820c] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 13.004159989s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-897534 exec mysql-5bb876957f-g46jc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-897534 exec mysql-5bb876957f-g46jc -- mysql -ppassword -e "show databases;": exit status 1 (89.563607ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1018 08:38:22.217512    9394 retry.go:31] will retry after 515.760913ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-897534 exec mysql-5bb876957f-g46jc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-897534 exec mysql-5bb876957f-g46jc -- mysql -ppassword -e "show databases;": exit status 1 (90.013986ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1018 08:38:22.823798    9394 retry.go:31] will retry after 2.096091998s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-897534 exec mysql-5bb876957f-g46jc -- mysql -ppassword -e "show databases;"
E1018 08:39:33.455425    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:41:49.586723    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:42:17.297543    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:46:49.586678    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (16.05s)
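The retry.go lines above show a randomized, roughly doubling backoff between attempts (515ms, then ~2.1s) while mysqld finishes starting inside the pod. A small sketch of that pattern; the exact backoff policy minikube uses is not shown in the log, so treat this as an approximation:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs fn with a randomized, roughly doubling delay, similar in
// spirit to the retry.go messages above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base<<i + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	_ = retry(5, 400*time.Millisecond, func() error {
		calls++
		if calls < 3 { // mysqld socket not ready on the first attempts
			return fmt.Errorf("ERROR 2002 (HY000): can't connect through socket")
		}
		return nil
	})
}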

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9394/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "sudo cat /etc/test/nested/copy/9394/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (1.73s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9394.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "sudo cat /etc/ssl/certs/9394.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9394.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "sudo cat /usr/share/ca-certificates/9394.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/93942.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "sudo cat /etc/ssl/certs/93942.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/93942.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "sudo cat /usr/share/ca-certificates/93942.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.73s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-897534 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
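The go-template above pulls every label key off the first node in the list. The same template can be exercised locally with text/template against a stand-in object; the label values here are hypothetical:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in for the `kubectl get nodes -o json` object the test walks.
	data := map[string]any{"items": []any{
		map[string]any{"metadata": map[string]any{"labels": map[string]string{
			"kubernetes.io/hostname": "functional-897534",
			"minikube.k8s.io/name":   "functional-897534",
		}}},
	}}
	// Same template shape the test passes to kubectl above.
	const tpl = `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	t := template.Must(template.New("labels").Parse(tpl))
	_ = t.Execute(os.Stdout, data) // prints each label key, space-separated
}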

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897534 ssh "sudo systemctl is-active docker": exit status 1 (291.502456ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897534 ssh "sudo systemctl is-active containerd": exit status 1 (278.787295ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
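This test passes precisely because the commands fail: with crio as the active runtime, `systemctl is-active` prints the state and exits non-zero (status 3 in the ssh output above) for an inactive unit, so docker and containerd being inactive is the expected result. A sketch of reading that exit code from Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active <unit>` prints the state and exits non-zero when
	// the unit is not active (status 3 for "inactive" in the run above).
	out, err := exec.Command("systemctl", "is-active", "docker").CombinedOutput()
	fmt.Printf("state: %s", out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", exitErr.ExitCode()) // 3 => inactive here
	}
}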

TestFunctional/parallel/License (0.43s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.43s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.54s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-897534 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897534 image ls --format short --alsologtostderr:
I1018 08:38:16.477193   49815 out.go:360] Setting OutFile to fd 1 ...
I1018 08:38:16.477432   49815 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:38:16.477440   49815 out.go:374] Setting ErrFile to fd 2...
I1018 08:38:16.477443   49815 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:38:16.477627   49815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
I1018 08:38:16.478182   49815 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:38:16.478268   49815 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:38:16.478603   49815 cli_runner.go:164] Run: docker container inspect functional-897534 --format={{.State.Status}}
I1018 08:38:16.500196   49815 ssh_runner.go:195] Run: systemctl --version
I1018 08:38:16.500268   49815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-897534
I1018 08:38:16.518433   49815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/functional-897534/id_rsa Username:docker}
I1018 08:38:16.615318   49815 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
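The stderr trace above shows where the listing comes from: `sudo crictl images --output json` inside the node, reformatted by minikube. A sketch of decoding the JSON shape that `image ls --format json` prints (see the ImageListJson test below); repoDigests is omitted for brevity:

package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the fields visible in the ImageListJson output below
// (id, repoTags, size — note size is a string in that output).
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	// One entry lifted from the JSON listing in this report.
	raw := `[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]`
	var imgs []image
	if err := json.Unmarshal([]byte(raw), &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Println(img.ID[:13], img.RepoTags[0], img.Size+" bytes")
	}
}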

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-897534 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ localhost/my-image                      │ functional-897534  │ 50bb56a6a9129 │ 1.47MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897534 image ls --format table --alsologtostderr:
I1018 08:38:19.398931   50542 out.go:360] Setting OutFile to fd 1 ...
I1018 08:38:19.399238   50542 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:38:19.399251   50542 out.go:374] Setting ErrFile to fd 2...
I1018 08:38:19.399256   50542 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:38:19.399665   50542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
I1018 08:38:19.400530   50542 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:38:19.400693   50542 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:38:19.401203   50542 cli_runner.go:164] Run: docker container inspect functional-897534 --format={{.State.Status}}
I1018 08:38:19.421249   50542 ssh_runner.go:195] Run: systemctl --version
I1018 08:38:19.421321   50542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-897534
I1018 08:38:19.441418   50542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/functional-897534/id_rsa Username:docker}
I1018 08:38:19.540898   50542 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-897534 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118
e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha
256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"50bb56a6a9129f781fee7ab53aad451b1ff0f4bff89
5dfd18151846d80e4c720","repoDigests":["localhost/my-image@sha256:56e270f87907bcce0e685be2be7b304522e50b986eda0e868583e7134c860046"],"repoTags":["localhost/my-image:functional-897534"],"size":"1468744"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"8efbaa56344747bce76aa28b9c3dc2f8d78da955ce2142c10b0b466ccbf7dacc","repoDigests":["docker.io/library/dab282a72e1ab82de73e57722e782efd65c88e134732b2f5d29ad97cd982b878-tmp@sha256:e52ecb8b7fb3e4b12a
0039ec16cf4cee0ea66d9849c3731447feb03dfdf9e625"],"repoTags":[],"size":"1466132"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/st
orage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/d
ashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897534 image ls --format json --alsologtostderr:
I1018 08:38:19.078198   50488 out.go:360] Setting OutFile to fd 1 ...
I1018 08:38:19.078471   50488 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:38:19.078482   50488 out.go:374] Setting ErrFile to fd 2...
I1018 08:38:19.078486   50488 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:38:19.078721   50488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
I1018 08:38:19.079274   50488 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:38:19.079388   50488 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:38:19.079786   50488 cli_runner.go:164] Run: docker container inspect functional-897534 --format={{.State.Status}}
I1018 08:38:19.099590   50488 ssh_runner.go:195] Run: systemctl --version
I1018 08:38:19.099651   50488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-897534
I1018 08:38:19.118832   50488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/functional-897534/id_rsa Username:docker}
I1018 08:38:19.215538   50488 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-897534 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 50bb56a6a9129f781fee7ab53aad451b1ff0f4bff895dfd18151846d80e4c720
repoDigests:
- localhost/my-image@sha256:56e270f87907bcce0e685be2be7b304522e50b986eda0e868583e7134c860046
repoTags:
- localhost/my-image:functional-897534
size: "1468744"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1462480"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 8efbaa56344747bce76aa28b9c3dc2f8d78da955ce2142c10b0b466ccbf7dacc
repoDigests:
- docker.io/library/dab282a72e1ab82de73e57722e782efd65c88e134732b2f5d29ad97cd982b878-tmp@sha256:e52ecb8b7fb3e4b12a0039ec16cf4cee0ea66d9849c3731447feb03dfdf9e625
repoTags: []
size: "1466132"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897534 image ls --format yaml --alsologtostderr:
I1018 08:38:18.839663   50434 out.go:360] Setting OutFile to fd 1 ...
I1018 08:38:18.839917   50434 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:38:18.839927   50434 out.go:374] Setting ErrFile to fd 2...
I1018 08:38:18.839931   50434 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:38:18.840197   50434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
I1018 08:38:18.840815   50434 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:38:18.840925   50434 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:38:18.841370   50434 cli_runner.go:164] Run: docker container inspect functional-897534 --format={{.State.Status}}
I1018 08:38:18.860416   50434 ssh_runner.go:195] Run: systemctl --version
I1018 08:38:18.860458   50434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-897534
I1018 08:38:18.879441   50434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/functional-897534/id_rsa Username:docker}
I1018 08:38:18.976803   50434 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
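
For reference, the YAML listing above is produced by "image ls --format yaml"; per the stderr trace, the command shells into the node and wraps crictl. A minimal reproduction sketch, assuming the functional-897534 profile from this run is still up:

$ out/minikube-linux-amd64 -p functional-897534 image ls --format yaml
# Roughly what the command runs on the node itself (see the ssh_runner line above):
$ out/minikube-linux-amd64 -p functional-897534 ssh -- sudo crictl images --output json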

TestFunctional/parallel/ImageCommands/ImageBuild (2.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897534 ssh pgrep buildkitd: exit status 1 (262.452879ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image build -t localhost/my-image:functional-897534 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-897534 image build -t localhost/my-image:functional-897534 testdata/build --alsologtostderr: (1.654697272s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-897534 image build -t localhost/my-image:functional-897534 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8efbaa56344
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-897534
--> 50bb56a6a91
Successfully tagged localhost/my-image:functional-897534
50bb56a6a9129f781fee7ab53aad451b1ff0f4bff895dfd18151846d80e4c720
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897534 image build -t localhost/my-image:functional-897534 testdata/build --alsologtostderr:
I1018 08:38:16.961834   49974 out.go:360] Setting OutFile to fd 1 ...
I1018 08:38:16.962121   49974 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:38:16.962132   49974 out.go:374] Setting ErrFile to fd 2...
I1018 08:38:16.962136   49974 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:38:16.962316   49974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
I1018 08:38:16.962919   49974 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:38:16.963540   49974 config.go:182] Loaded profile config "functional-897534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:38:16.963926   49974 cli_runner.go:164] Run: docker container inspect functional-897534 --format={{.State.Status}}
I1018 08:38:16.982038   49974 ssh_runner.go:195] Run: systemctl --version
I1018 08:38:16.982100   49974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-897534
I1018 08:38:17.000794   49974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/functional-897534/id_rsa Username:docker}
I1018 08:38:17.096058   49974 build_images.go:161] Building image from path: /tmp/build.3970885709.tar
I1018 08:38:17.096133   49974 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 08:38:17.104744   49974 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3970885709.tar
I1018 08:38:17.108554   49974 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3970885709.tar: stat -c "%s %y" /var/lib/minikube/build/build.3970885709.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3970885709.tar': No such file or directory
I1018 08:38:17.108583   49974 ssh_runner.go:362] scp /tmp/build.3970885709.tar --> /var/lib/minikube/build/build.3970885709.tar (3072 bytes)
I1018 08:38:17.127374   49974 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3970885709
I1018 08:38:17.135457   49974 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3970885709 -xf /var/lib/minikube/build/build.3970885709.tar
I1018 08:38:17.143828   49974 crio.go:315] Building image: /var/lib/minikube/build/build.3970885709
I1018 08:38:17.143907   49974 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-897534 /var/lib/minikube/build/build.3970885709 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1018 08:38:18.546521   49974 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-897534 /var/lib/minikube/build/build.3970885709 --cgroup-manager=cgroupfs: (1.402570595s)
I1018 08:38:18.546626   49974 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3970885709
I1018 08:38:18.555602   49974 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3970885709.tar
I1018 08:38:18.563740   49974 build_images.go:217] Built localhost/my-image:functional-897534 from /tmp/build.3970885709.tar
I1018 08:38:18.563778   49974 build_images.go:133] succeeded building to: functional-897534
I1018 08:38:18.563785   49974 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.14s)
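
The three STEP lines in the build output imply a build context roughly like the sketch below. This is a reconstruction from the log, not the verbatim contents of testdata/build, and the content.txt payload is hypothetical:

$ mkdir -p build
$ echo test-payload > build/content.txt        # hypothetical file contents
$ cat > build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
$ out/minikube-linux-amd64 -p functional-897534 image build -t localhost/my-image:functional-897534 build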

TestFunctional/parallel/ImageCommands/Setup (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-897534
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.01s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/MountCmd/any-port (5.76s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897534 /tmp/TestFunctionalparallelMountCmdany-port3700370279/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760776667849927314" to /tmp/TestFunctionalparallelMountCmdany-port3700370279/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760776667849927314" to /tmp/TestFunctionalparallelMountCmdany-port3700370279/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760776667849927314" to /tmp/TestFunctionalparallelMountCmdany-port3700370279/001/test-1760776667849927314
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897534 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (314.68103ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1018 08:37:48.164926    9394 retry.go:31] will retry after 275.144639ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 08:37 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 08:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 08:37 test-1760776667849927314
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh cat /mount-9p/test-1760776667849927314
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-897534 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c7b279f3-6a39-48cb-af80-961f5ba411de] Pending
helpers_test.go:352: "busybox-mount" [c7b279f3-6a39-48cb-af80-961f5ba411de] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c7b279f3-6a39-48cb-af80-961f5ba411de] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c7b279f3-6a39-48cb-af80-961f5ba411de] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.00337066s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-897534 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897534 /tmp/TestFunctionalparallelMountCmdany-port3700370279/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.76s)
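
Condensed, the mount flow exercised above looks like this (the host path is illustrative; the test used a per-test temp dir):

$ out/minikube-linux-amd64 mount -p functional-897534 /tmp/somedir:/mount-9p &
$ out/minikube-linux-amd64 -p functional-897534 ssh "findmnt -T /mount-9p | grep 9p"
$ out/minikube-linux-amd64 -p functional-897534 ssh -- ls -la /mount-9p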

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "343.413446ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "50.384479ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "360.390476ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.790284ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image rm kicbase/echo-server:functional-897534 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/MountCmd/specific-port (1.67s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897534 /tmp/TestFunctionalparallelMountCmdspecific-port788913734/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897534 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (327.548668ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1018 08:37:53.935039    9394 retry.go:31] will retry after 273.779838ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897534 /tmp/TestFunctionalparallelMountCmdspecific-port788913734/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897534 ssh "sudo umount -f /mount-9p": exit status 1 (314.539377ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-897534 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897534 /tmp/TestFunctionalparallelMountCmdspecific-port788913734/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.67s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3942137184/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3942137184/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3942137184/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897534 ssh "findmnt -T" /mount1: exit status 1 (370.216718ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1018 08:37:55.649002    9394 retry.go:31] will retry after 329.739426ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-897534 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3942137184/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3942137184/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3942137184/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)
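
The cleanup step above relies on a single kill switch: one --kill=true invocation tears down every mount daemon for the profile, which is why all three subsequent stop attempts find no parent process:

$ out/minikube-linux-amd64 mount -p functional-897534 --kill=true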

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-897534 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-897534 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-897534 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 48052: os: process already finished
helpers_test.go:519: unable to terminate pid 47730: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-897534 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-897534 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-897534 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [15c487a9-4339-4b91-8b28-4f08ec76d8d9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [15c487a9-4339-4b91-8b28-4f08ec76d8d9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003883861s
I1018 08:38:08.534317    9394 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-897534 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.62.172 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
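
Putting the tunnel steps together: with the tunnel running, the service's LoadBalancer ingress IP becomes reachable from the host. 10.101.62.172 is the IP assigned in this run; the curl line is a hypothetical manual check, since the test probes the URL programmatically:

$ out/minikube-linux-amd64 -p functional-897534 tunnel &
$ kubectl --context functional-897534 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
10.101.62.172
$ curl http://10.101.62.172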

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-897534 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/List (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-897534 service list: (1.696273526s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.70s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-897534 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-897534 service list -o json: (1.697339402s)
functional_test.go:1504: Took "1.697436721s" to run "out/minikube-linux-amd64 -p functional-897534 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-897534
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-897534
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-897534
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (147.53s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-128433 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m26.807369996s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (147.53s)

TestMultiControlPlane/serial/DeployApp (5.23s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-128433 kubectl -- rollout status deployment/busybox: (3.40547676s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-74hd7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-brmqk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-hvdqd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-74hd7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-brmqk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-hvdqd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-74hd7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-brmqk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-hvdqd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.23s)
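
Condensed for a single pod, the deploy-and-resolve loop above is (pod name taken from this run):

$ out/minikube-linux-amd64 -p ha-128433 kubectl -- rollout status deployment/busybox
$ out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-74hd7 -- nslookup kubernetes.default.svc.cluster.local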

TestMultiControlPlane/serial/PingHostFromPods (0.97s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-74hd7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-74hd7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-brmqk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-brmqk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-hvdqd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-hvdqd -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.97s)
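
The shell pipeline above extracts the host gateway address from nslookup output: awk 'NR==5' keeps the answer line and cut -d' ' -f3 takes its third space-separated field, which is then pinged. For one pod (in this run the ping target was the 192.168.49.1 gateway, which this pipeline presumably resolved):

$ out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-74hd7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
$ out/minikube-linux-amd64 -p ha-128433 kubectl -- exec busybox-7b57f96db7-74hd7 -- sh -c "ping -c 1 192.168.49.1"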

TestMultiControlPlane/serial/AddWorkerNode (25.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-128433 node add --alsologtostderr -v 5: (24.189650696s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.08s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-128433 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

TestMultiControlPlane/serial/CopyFile (16.73s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp testdata/cp-test.txt ha-128433:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4114314191/001/cp-test_ha-128433.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433:/home/docker/cp-test.txt ha-128433-m02:/home/docker/cp-test_ha-128433_ha-128433-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m02 "sudo cat /home/docker/cp-test_ha-128433_ha-128433-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433:/home/docker/cp-test.txt ha-128433-m03:/home/docker/cp-test_ha-128433_ha-128433-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m03 "sudo cat /home/docker/cp-test_ha-128433_ha-128433-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433:/home/docker/cp-test.txt ha-128433-m04:/home/docker/cp-test_ha-128433_ha-128433-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m04 "sudo cat /home/docker/cp-test_ha-128433_ha-128433-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp testdata/cp-test.txt ha-128433-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4114314191/001/cp-test_ha-128433-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433-m02:/home/docker/cp-test.txt ha-128433:/home/docker/cp-test_ha-128433-m02_ha-128433.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433 "sudo cat /home/docker/cp-test_ha-128433-m02_ha-128433.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433-m02:/home/docker/cp-test.txt ha-128433-m03:/home/docker/cp-test_ha-128433-m02_ha-128433-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m03 "sudo cat /home/docker/cp-test_ha-128433-m02_ha-128433-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433-m02:/home/docker/cp-test.txt ha-128433-m04:/home/docker/cp-test_ha-128433-m02_ha-128433-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m04 "sudo cat /home/docker/cp-test_ha-128433-m02_ha-128433-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp testdata/cp-test.txt ha-128433-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4114314191/001/cp-test_ha-128433-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433-m03:/home/docker/cp-test.txt ha-128433:/home/docker/cp-test_ha-128433-m03_ha-128433.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433 "sudo cat /home/docker/cp-test_ha-128433-m03_ha-128433.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433-m03:/home/docker/cp-test.txt ha-128433-m02:/home/docker/cp-test_ha-128433-m03_ha-128433-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m02 "sudo cat /home/docker/cp-test_ha-128433-m03_ha-128433-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433-m03:/home/docker/cp-test.txt ha-128433-m04:/home/docker/cp-test_ha-128433-m03_ha-128433-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m04 "sudo cat /home/docker/cp-test_ha-128433-m03_ha-128433-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp testdata/cp-test.txt ha-128433-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4114314191/001/cp-test_ha-128433-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433-m04:/home/docker/cp-test.txt ha-128433:/home/docker/cp-test_ha-128433-m04_ha-128433.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433 "sudo cat /home/docker/cp-test_ha-128433-m04_ha-128433.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433-m04:/home/docker/cp-test.txt ha-128433-m02:/home/docker/cp-test_ha-128433-m04_ha-128433-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m02 "sudo cat /home/docker/cp-test_ha-128433-m04_ha-128433-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 cp ha-128433-m04:/home/docker/cp-test.txt ha-128433-m03:/home/docker/cp-test_ha-128433-m04_ha-128433-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m03 "sudo cat /home/docker/cp-test_ha-128433-m04_ha-128433-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.73s)
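
The CopyFile block walks every (source, destination) node pair; each leg is a cp followed by an ssh cat to verify the bytes arrived. One leg, with node names from this run:

$ out/minikube-linux-amd64 -p ha-128433 cp testdata/cp-test.txt ha-128433-m02:/home/docker/cp-test.txt
$ out/minikube-linux-amd64 -p ha-128433 ssh -n ha-128433-m02 "sudo cat /home/docker/cp-test.txt"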

TestMultiControlPlane/serial/StopSecondaryNode (14.25s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-128433 node stop m02 --alsologtostderr -v 5: (13.554452589s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-128433 status --alsologtostderr -v 5: exit status 7 (693.686025ms)

-- stdout --
	ha-128433
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-128433-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-128433-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-128433-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1018 08:51:33.587462   74476 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:51:33.587914   74476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:51:33.587926   74476 out.go:374] Setting ErrFile to fd 2...
	I1018 08:51:33.587930   74476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:51:33.588184   74476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:51:33.588433   74476 out.go:368] Setting JSON to false
	I1018 08:51:33.588463   74476 mustload.go:65] Loading cluster: ha-128433
	I1018 08:51:33.588577   74476 notify.go:220] Checking for updates...
	I1018 08:51:33.588985   74476 config.go:182] Loaded profile config "ha-128433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:51:33.589007   74476 status.go:174] checking status of ha-128433 ...
	I1018 08:51:33.589548   74476 cli_runner.go:164] Run: docker container inspect ha-128433 --format={{.State.Status}}
	I1018 08:51:33.609835   74476 status.go:371] ha-128433 host status = "Running" (err=<nil>)
	I1018 08:51:33.609857   74476 host.go:66] Checking if "ha-128433" exists ...
	I1018 08:51:33.610127   74476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-128433
	I1018 08:51:33.629975   74476 host.go:66] Checking if "ha-128433" exists ...
	I1018 08:51:33.630333   74476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:51:33.630447   74476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-128433
	I1018 08:51:33.649513   74476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/ha-128433/id_rsa Username:docker}
	I1018 08:51:33.745987   74476 ssh_runner.go:195] Run: systemctl --version
	I1018 08:51:33.752850   74476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:51:33.765492   74476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:51:33.824213   74476 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-18 08:51:33.812940154 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:51:33.824789   74476 kubeconfig.go:125] found "ha-128433" server: "https://192.168.49.254:8443"
	I1018 08:51:33.824824   74476 api_server.go:166] Checking apiserver status ...
	I1018 08:51:33.824877   74476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:51:33.836847   74476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1268/cgroup
	W1018 08:51:33.845829   74476 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1268/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 08:51:33.845886   74476 ssh_runner.go:195] Run: ls
	I1018 08:51:33.849903   74476 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 08:51:33.854051   74476 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 08:51:33.854082   74476 status.go:463] ha-128433 apiserver status = Running (err=<nil>)
	I1018 08:51:33.854093   74476 status.go:176] ha-128433 status: &{Name:ha-128433 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 08:51:33.854112   74476 status.go:174] checking status of ha-128433-m02 ...
	I1018 08:51:33.854405   74476 cli_runner.go:164] Run: docker container inspect ha-128433-m02 --format={{.State.Status}}
	I1018 08:51:33.873029   74476 status.go:371] ha-128433-m02 host status = "Stopped" (err=<nil>)
	I1018 08:51:33.873048   74476 status.go:384] host is not running, skipping remaining checks
	I1018 08:51:33.873054   74476 status.go:176] ha-128433-m02 status: &{Name:ha-128433-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 08:51:33.873073   74476 status.go:174] checking status of ha-128433-m03 ...
	I1018 08:51:33.873369   74476 cli_runner.go:164] Run: docker container inspect ha-128433-m03 --format={{.State.Status}}
	I1018 08:51:33.892910   74476 status.go:371] ha-128433-m03 host status = "Running" (err=<nil>)
	I1018 08:51:33.892939   74476 host.go:66] Checking if "ha-128433-m03" exists ...
	I1018 08:51:33.893169   74476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-128433-m03
	I1018 08:51:33.911197   74476 host.go:66] Checking if "ha-128433-m03" exists ...
	I1018 08:51:33.911465   74476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:51:33.911501   74476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-128433-m03
	I1018 08:51:33.930027   74476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/ha-128433-m03/id_rsa Username:docker}
	I1018 08:51:34.024720   74476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:51:34.038036   74476 kubeconfig.go:125] found "ha-128433" server: "https://192.168.49.254:8443"
	I1018 08:51:34.038061   74476 api_server.go:166] Checking apiserver status ...
	I1018 08:51:34.038093   74476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:51:34.049927   74476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup
	W1018 08:51:34.058658   74476 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 08:51:34.058704   74476 ssh_runner.go:195] Run: ls
	I1018 08:51:34.062593   74476 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 08:51:34.066752   74476 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 08:51:34.066780   74476 status.go:463] ha-128433-m03 apiserver status = Running (err=<nil>)
	I1018 08:51:34.066791   74476 status.go:176] ha-128433-m03 status: &{Name:ha-128433-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 08:51:34.066806   74476 status.go:174] checking status of ha-128433-m04 ...
	I1018 08:51:34.067090   74476 cli_runner.go:164] Run: docker container inspect ha-128433-m04 --format={{.State.Status}}
	I1018 08:51:34.085144   74476 status.go:371] ha-128433-m04 host status = "Running" (err=<nil>)
	I1018 08:51:34.085166   74476 host.go:66] Checking if "ha-128433-m04" exists ...
	I1018 08:51:34.085479   74476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-128433-m04
	I1018 08:51:34.104283   74476 host.go:66] Checking if "ha-128433-m04" exists ...
	I1018 08:51:34.104611   74476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:51:34.104654   74476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-128433-m04
	I1018 08:51:34.124848   74476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/ha-128433-m04/id_rsa Username:docker}
	I1018 08:51:34.218645   74476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:51:34.231668   74476 status.go:176] ha-128433-m04 status: &{Name:ha-128433-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.25s)
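
The stderr trace above shows how the status probe classifies each node: docker container inspect --format={{.State.Status}} for the host, systemctl is-active kubelet over SSH for the kubelet, and an HTTPS GET against /healthz for the apiserver. The failed freezer-cgroup lookup is tolerated (on a cgroup v2 host there is no v1 freezer controller in /proc/PID/cgroup, so the egrep exits 1) and the probe falls through to the healthz endpoint. A minimal Go sketch of that last step, assuming an insecure client and a 5-second timeout rather than minikube's actual transport settings:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz mirrors the final probe in the trace: GET <server>/healthz
// and treat HTTP 200 with body "ok" as a running apiserver. The insecure
// TLS config and the 5s timeout are illustrative assumptions, not
// minikube's actual settings.
func checkHealthz(server string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(server + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := checkHealthz("https://192.168.49.254:8443")
	fmt.Println(ok, err)
}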

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.8s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-128433 node start m02 --alsologtostderr -v 5: (7.851110659s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.80s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.83s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 stop --alsologtostderr -v 5
E1018 08:51:49.586569    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-128433 stop --alsologtostderr -v 5: (43.584733203s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 start --wait true --alsologtostderr -v 5
E1018 08:52:45.291564    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:52:45.297980    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:52:45.309460    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:52:45.330905    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:52:45.372417    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:52:45.453845    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:52:45.615271    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:52:45.936816    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:52:46.578593    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:52:47.860571    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:52:50.421912    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:52:55.543478    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:53:05.785086    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:53:12.660308    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-128433 start --wait true --alsologtostderr -v 5: (55.137979782s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.83s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.61s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 node delete m03 --alsologtostderr -v 5
E1018 08:53:26.266500    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-128433 node delete m03 --alsologtostderr -v 5: (9.798318885s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.61s)
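
The go-template in the final kubectl call above walks every node's conditions and prints just the Ready status, one line per node. kubectl renders go-templates against the decoded JSON (maps with lowercase keys), which the following self-contained sketch imitates with stand-in data; the toy node list is ours, not output from this run:

package main

import (
	"os"
	"text/template"
)

func main() {
	// The template string the test passes via -o go-template=...
	const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Stand-in for the node list JSON: two Ready nodes.
	data := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(tpl))
	t.Execute(os.Stdout, data) // prints " True" twice, one line per node
}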

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

TestMultiControlPlane/serial/StopCluster (41.64s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 stop --alsologtostderr -v 5
E1018 08:54:07.228958    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-128433 stop --alsologtostderr -v 5: (41.532442236s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-128433 status --alsologtostderr -v 5: exit status 7 (110.127148ms)
-- stdout --
	ha-128433
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-128433-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-128433-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1018 08:54:16.366157   88566 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:54:16.366449   88566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:54:16.366460   88566 out.go:374] Setting ErrFile to fd 2...
	I1018 08:54:16.366464   88566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:54:16.366673   88566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 08:54:16.366859   88566 out.go:368] Setting JSON to false
	I1018 08:54:16.366882   88566 mustload.go:65] Loading cluster: ha-128433
	I1018 08:54:16.366969   88566 notify.go:220] Checking for updates...
	I1018 08:54:16.367437   88566 config.go:182] Loaded profile config "ha-128433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:54:16.367461   88566 status.go:174] checking status of ha-128433 ...
	I1018 08:54:16.367931   88566 cli_runner.go:164] Run: docker container inspect ha-128433 --format={{.State.Status}}
	I1018 08:54:16.389222   88566 status.go:371] ha-128433 host status = "Stopped" (err=<nil>)
	I1018 08:54:16.389267   88566 status.go:384] host is not running, skipping remaining checks
	I1018 08:54:16.389275   88566 status.go:176] ha-128433 status: &{Name:ha-128433 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 08:54:16.389330   88566 status.go:174] checking status of ha-128433-m02 ...
	I1018 08:54:16.389736   88566 cli_runner.go:164] Run: docker container inspect ha-128433-m02 --format={{.State.Status}}
	I1018 08:54:16.408168   88566 status.go:371] ha-128433-m02 host status = "Stopped" (err=<nil>)
	I1018 08:54:16.408189   88566 status.go:384] host is not running, skipping remaining checks
	I1018 08:54:16.408195   88566 status.go:176] ha-128433-m02 status: &{Name:ha-128433-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 08:54:16.408220   88566 status.go:174] checking status of ha-128433-m04 ...
	I1018 08:54:16.408539   88566 cli_runner.go:164] Run: docker container inspect ha-128433-m04 --format={{.State.Status}}
	I1018 08:54:16.426900   88566 status.go:371] ha-128433-m04 host status = "Stopped" (err=<nil>)
	I1018 08:54:16.426922   88566 status.go:384] host is not running, skipping remaining checks
	I1018 08:54:16.426928   88566 status.go:176] ha-128433-m04 status: &{Name:ha-128433-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.64s)
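
Note the assertion shape here: with every node stopped, minikube status still prints the per-node table but exits with status 7, so the test keys off the exit code rather than parsing stdout. A sketch of that check in Go, reusing the binary path and profile name from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same status command the test runs; while the cluster is
	// stopped it is expected to exit with status 7.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-128433", "status")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit code:", exitErr.ExitCode()) // 7 for a stopped cluster
	} else if err == nil {
		fmt.Println("exit code: 0 (cluster running)")
	}
}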

TestMultiControlPlane/serial/RestartCluster (56.92s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-128433 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (56.098467255s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.92s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

TestMultiControlPlane/serial/AddSecondaryNode (41.43s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 node add --control-plane --alsologtostderr -v 5
E1018 08:55:29.153560    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-128433 node add --control-plane --alsologtostderr -v 5: (40.538387085s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-128433 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.43s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

TestJSONOutput/start/Command (37.88s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-162687 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-162687 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (37.881136036s)
--- PASS: TestJSONOutput/start/Command (37.88s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
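
As their names suggest, the DistinctCurrentSteps and IncreasingCurrentSteps subtests validate the currentstep field of the step events emitted under --output=json: values must not repeat and must grow monotonically. A small sketch of that invariant, assuming the currentstep strings have already been collected from the event stream (the real assertions live in json_output_test.go):

package main

import (
	"fmt"
	"strconv"
)

// distinctAndIncreasing checks both invariants at once: a strictly
// increasing sequence can contain no duplicates.
func distinctAndIncreasing(steps []string) bool {
	prev := -1
	for _, s := range steps {
		n, err := strconv.Atoi(s)
		if err != nil || n <= prev {
			return false
		}
		prev = n
	}
	return true
}

func main() {
	fmt.Println(distinctAndIncreasing([]string{"0", "1", "3", "19"})) // true
	fmt.Println(distinctAndIncreasing([]string{"0", "2", "2"}))       // false
}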

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.96s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-162687 --output=json --user=testUser
E1018 08:56:49.588054    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-162687 --output=json --user=testUser: (7.958574409s)
--- PASS: TestJSONOutput/stop/Command (7.96s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-928043 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-928043 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (70.936939ms)
-- stdout --
	{"specversion":"1.0","id":"78cc1a53-f7e1-4dc0-b7cf-aaefbc761bc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-928043] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4fe7267a-7ebf-4d70-8583-504795e99f97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21767"}}
	{"specversion":"1.0","id":"c68d94aa-d732-4892-b0c3-157c111cf5bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5a33e842-129d-4ce8-bb9d-b28d3ee56bef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig"}}
	{"specversion":"1.0","id":"e793f20c-df39-49e5-a0c4-0c945c14ba16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube"}}
	{"specversion":"1.0","id":"23a70221-6ed4-433b-8ad0-350681a0ce43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"56ac4b2e-197d-45ab-9ecd-dda400bb35ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1e9cfc37-fe82-436a-b236-e749196cad54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-928043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-928043
--- PASS: TestErrorJSONOutput (0.22s)
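
Each stdout line above is a CloudEvents v1.0 envelope: specversion, id, source, type, and a string-keyed data map. A sketch of decoding one such line in Go; the struct is ours, with fields named after exactly what the log shows:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the envelope fields visible in the lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event copied verbatim from the log.
	line := `{"specversion":"1.0","id":"1e9cfc37-fe82-436a-b236-e749196cad54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"])
	// io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS 56
}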

TestKicCustomNetwork/create_custom_network (32.23s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-224276 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-224276 --network=: (30.034902377s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-224276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-224276
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-224276: (2.175728007s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.23s)

TestKicCustomNetwork/use_default_bridge_network (23.48s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-594244 --network=bridge
E1018 08:57:45.293177    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-594244 --network=bridge: (21.444071832s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-594244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-594244
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-594244: (2.016704418s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.48s)

TestKicExistingNetwork (24.8s)

=== RUN   TestKicExistingNetwork
I1018 08:57:54.135043    9394 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1018 08:57:54.152730    9394 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1018 08:57:54.152806    9394 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1018 08:57:54.152826    9394 cli_runner.go:164] Run: docker network inspect existing-network
W1018 08:57:54.170240    9394 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1018 08:57:54.170268    9394 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1018 08:57:54.170293    9394 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1018 08:57:54.170419    9394 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1018 08:57:54.188851    9394 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0a5d0734e8e5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:09:81:3f:ef:cf} reservation:<nil>}
I1018 08:57:54.189213    9394 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013480}
I1018 08:57:54.189261    9394 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1018 08:57:54.189313    9394 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1018 08:57:54.246995    9394 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-014993 --network=existing-network
E1018 08:58:12.996534    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-014993 --network=existing-network: (22.656953141s)
helpers_test.go:175: Cleaning up "existing-network-014993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-014993
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-014993: (1.989650871s)
I1018 08:58:18.911811    9394 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.80s)
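
The subnet-selection lines show the scan order: the first candidate, 192.168.49.0/24, is already held by the minikube bridge (br-0a5d0734e8e5), so the next candidate, 192.168.58.0/24, is used. A sketch of that scan; the step of 9 in the third octet matches the subnets this run allocated (49, 58, and later 67) but is inferred from the log, not taken from minikube's source:

package main

import "fmt"

// pickSubnet walks /24 candidates starting at 192.168.49.0 and returns
// the first one not already taken. The starting octet and the step of 9
// are assumptions read off this run's log.
func pickSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // held by br-0a5d0734e8e5 per the log
	}
	fmt.Println(pickSubnet(taken)) // 192.168.58.0/24
}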

TestKicCustomSubnet (24.22s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-013065 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-013065 --subnet=192.168.60.0/24: (22.017287896s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-013065 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-013065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-013065
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-013065: (2.186423741s)
--- PASS: TestKicCustomSubnet (24.22s)

TestKicStaticIP (24.27s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-867361 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-867361 --static-ip=192.168.200.200: (21.974110333s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-867361 ip
helpers_test.go:175: Cleaning up "static-ip-867361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-867361
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-867361: (2.164692817s)
--- PASS: TestKicStaticIP (24.27s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (48.15s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-724641 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-724641 --driver=docker  --container-runtime=crio: (20.82526177s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-727415 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-727415 --driver=docker  --container-runtime=crio: (21.378424512s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-724641
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-727415
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-727415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-727415
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-727415: (2.384827464s)
helpers_test.go:175: Cleaning up "first-724641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-724641
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-724641: (2.360411388s)
--- PASS: TestMinikubeProfile (48.15s)

TestMountStart/serial/StartWithMountFirst (5.84s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-927247 --memory=3072 --mount-string /tmp/TestMountStartserial2517538473/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-927247 --memory=3072 --mount-string /tmp/TestMountStartserial2517538473/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.843788344s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.84s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-927247 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (8.26s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-938298 --memory=3072 --mount-string /tmp/TestMountStartserial2517538473/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-938298 --memory=3072 --mount-string /tmp/TestMountStartserial2517538473/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.262340487s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.26s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-938298 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-927247 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-927247 --alsologtostderr -v=5: (1.699170295s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-938298 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-938298
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-938298: (1.245707958s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (7.08s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-938298
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-938298: (6.079576262s)
--- PASS: TestMountStart/serial/RestartStopped (7.08s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-938298 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (93.12s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-013332 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1018 09:01:49.587748    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-013332 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m32.641041853s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (93.12s)

TestMultiNode/serial/DeployApp2Nodes (3.45s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-013332 -- rollout status deployment/busybox: (2.076026973s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- exec busybox-7b57f96db7-f8lrn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- exec busybox-7b57f96db7-fdzld -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- exec busybox-7b57f96db7-f8lrn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- exec busybox-7b57f96db7-fdzld -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- exec busybox-7b57f96db7-f8lrn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- exec busybox-7b57f96db7-fdzld -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.45s)

TestMultiNode/serial/PingHostFrom2Pods (0.67s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- exec busybox-7b57f96db7-f8lrn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- exec busybox-7b57f96db7-f8lrn -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- exec busybox-7b57f96db7-fdzld -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-013332 -- exec busybox-7b57f96db7-fdzld -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.67s)
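
The pipeline in this test extracts the host IP from inside each pod: busybox nslookup prints the answer for host.minikube.internal on its fifth line, awk 'NR==5' selects that line, and cut -d' ' -f3 takes its third space-separated field, which is then handed to ping. An equivalent of the extraction in Go, with an illustrative nslookup transcript standing in for the real one:

package main

import (
	"fmt"
	"strings"
)

// thirdFieldOfLine5 reproduces `awk 'NR==5' | cut -d' ' -f3`: take the
// fifth line of the output, then its third space-delimited field.
func thirdFieldOfLine5(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative busybox nslookup output; the real transcript may differ.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1 host.minikube.internal\n"
	fmt.Println(thirdFieldOfLine5(sample)) // 192.168.67.1
}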

TestMultiNode/serial/AddNode (27.29s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-013332 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-013332 -v=5 --alsologtostderr: (26.657469213s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.29s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-013332 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.64s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

TestMultiNode/serial/CopyFile (9.51s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 cp testdata/cp-test.txt multinode-013332:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 cp multinode-013332:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2074987250/001/cp-test_multinode-013332.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 cp multinode-013332:/home/docker/cp-test.txt multinode-013332-m02:/home/docker/cp-test_multinode-013332_multinode-013332-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332-m02 "sudo cat /home/docker/cp-test_multinode-013332_multinode-013332-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 cp multinode-013332:/home/docker/cp-test.txt multinode-013332-m03:/home/docker/cp-test_multinode-013332_multinode-013332-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332-m03 "sudo cat /home/docker/cp-test_multinode-013332_multinode-013332-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 cp testdata/cp-test.txt multinode-013332-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 cp multinode-013332-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2074987250/001/cp-test_multinode-013332-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 cp multinode-013332-m02:/home/docker/cp-test.txt multinode-013332:/home/docker/cp-test_multinode-013332-m02_multinode-013332.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332 "sudo cat /home/docker/cp-test_multinode-013332-m02_multinode-013332.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 cp multinode-013332-m02:/home/docker/cp-test.txt multinode-013332-m03:/home/docker/cp-test_multinode-013332-m02_multinode-013332-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332-m03 "sudo cat /home/docker/cp-test_multinode-013332-m02_multinode-013332-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 cp testdata/cp-test.txt multinode-013332-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 cp multinode-013332-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2074987250/001/cp-test_multinode-013332-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 cp multinode-013332-m03:/home/docker/cp-test.txt multinode-013332:/home/docker/cp-test_multinode-013332-m03_multinode-013332.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332 "sudo cat /home/docker/cp-test_multinode-013332-m03_multinode-013332.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 cp multinode-013332-m03:/home/docker/cp-test.txt multinode-013332-m02:/home/docker/cp-test_multinode-013332-m03_multinode-013332-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 ssh -n multinode-013332-m02 "sudo cat /home/docker/cp-test_multinode-013332-m03_multinode-013332-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.51s)

TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-013332 node stop m03: (1.257669135s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-013332 status: exit status 7 (487.699963ms)

-- stdout --
	multinode-013332
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-013332-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-013332-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-013332 status --alsologtostderr: exit status 7 (490.573299ms)

-- stdout --
	multinode-013332
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-013332-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-013332-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1018 09:02:39.229614  148106 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:02:39.229898  148106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:02:39.229910  148106 out.go:374] Setting ErrFile to fd 2...
	I1018 09:02:39.229916  148106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:02:39.230108  148106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:02:39.230297  148106 out.go:368] Setting JSON to false
	I1018 09:02:39.230327  148106 mustload.go:65] Loading cluster: multinode-013332
	I1018 09:02:39.230427  148106 notify.go:220] Checking for updates...
	I1018 09:02:39.230755  148106 config.go:182] Loaded profile config "multinode-013332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:02:39.230772  148106 status.go:174] checking status of multinode-013332 ...
	I1018 09:02:39.231222  148106 cli_runner.go:164] Run: docker container inspect multinode-013332 --format={{.State.Status}}
	I1018 09:02:39.251228  148106 status.go:371] multinode-013332 host status = "Running" (err=<nil>)
	I1018 09:02:39.251253  148106 host.go:66] Checking if "multinode-013332" exists ...
	I1018 09:02:39.251562  148106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-013332
	I1018 09:02:39.269874  148106 host.go:66] Checking if "multinode-013332" exists ...
	I1018 09:02:39.270166  148106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:02:39.270224  148106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-013332
	I1018 09:02:39.288600  148106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/multinode-013332/id_rsa Username:docker}
	I1018 09:02:39.382694  148106 ssh_runner.go:195] Run: systemctl --version
	I1018 09:02:39.388781  148106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:02:39.401245  148106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:02:39.460449  148106 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-18 09:02:39.450120835 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:02:39.460957  148106 kubeconfig.go:125] found "multinode-013332" server: "https://192.168.67.2:8443"
	I1018 09:02:39.460980  148106 api_server.go:166] Checking apiserver status ...
	I1018 09:02:39.461012  148106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:02:39.472999  148106 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1235/cgroup
	W1018 09:02:39.481909  148106 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1235/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:02:39.481978  148106 ssh_runner.go:195] Run: ls
	I1018 09:02:39.485948  148106 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1018 09:02:39.489993  148106 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1018 09:02:39.490014  148106 status.go:463] multinode-013332 apiserver status = Running (err=<nil>)
	I1018 09:02:39.490024  148106 status.go:176] multinode-013332 status: &{Name:multinode-013332 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:02:39.490039  148106 status.go:174] checking status of multinode-013332-m02 ...
	I1018 09:02:39.490265  148106 cli_runner.go:164] Run: docker container inspect multinode-013332-m02 --format={{.State.Status}}
	I1018 09:02:39.508877  148106 status.go:371] multinode-013332-m02 host status = "Running" (err=<nil>)
	I1018 09:02:39.508904  148106 host.go:66] Checking if "multinode-013332-m02" exists ...
	I1018 09:02:39.509158  148106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-013332-m02
	I1018 09:02:39.527590  148106 host.go:66] Checking if "multinode-013332-m02" exists ...
	I1018 09:02:39.527902  148106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:02:39.527965  148106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-013332-m02
	I1018 09:02:39.545650  148106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21767-5897/.minikube/machines/multinode-013332-m02/id_rsa Username:docker}
	I1018 09:02:39.639589  148106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:02:39.652492  148106 status.go:176] multinode-013332-m02 status: &{Name:multinode-013332-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:02:39.652530  148106 status.go:174] checking status of multinode-013332-m03 ...
	I1018 09:02:39.652871  148106 cli_runner.go:164] Run: docker container inspect multinode-013332-m03 --format={{.State.Status}}
	I1018 09:02:39.673217  148106 status.go:371] multinode-013332-m03 host status = "Stopped" (err=<nil>)
	I1018 09:02:39.673241  148106 status.go:384] host is not running, skipping remaining checks
	I1018 09:02:39.673249  148106 status.go:176] multinode-013332-m03 status: &{Name:multinode-013332-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

TestMultiNode/serial/StartAfterStop (7.14s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 node start m03 -v=5 --alsologtostderr
E1018 09:02:45.291856    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-013332 node start m03 -v=5 --alsologtostderr: (6.440445484s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.14s)

TestMultiNode/serial/RestartKeepsNodes (55.68s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-013332
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-013332
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-013332: (29.517159531s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-013332 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-013332 --wait=true -v=5 --alsologtostderr: (26.059039216s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-013332
--- PASS: TestMultiNode/serial/RestartKeepsNodes (55.68s)

TestMultiNode/serial/DeleteNode (5.01s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-013332 node delete m03: (4.422253622s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.01s)

TestMultiNode/serial/StopMultiNode (17.51s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-013332 stop: (17.331955531s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-013332 status: exit status 7 (89.307498ms)

-- stdout --
	multinode-013332
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-013332-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-013332 status --alsologtostderr: exit status 7 (85.640791ms)

-- stdout --
	multinode-013332
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-013332-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1018 09:04:04.975117  156894 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:04:04.975414  156894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:04:04.975424  156894 out.go:374] Setting ErrFile to fd 2...
	I1018 09:04:04.975428  156894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:04:04.975611  156894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:04:04.975776  156894 out.go:368] Setting JSON to false
	I1018 09:04:04.975797  156894 mustload.go:65] Loading cluster: multinode-013332
	I1018 09:04:04.975851  156894 notify.go:220] Checking for updates...
	I1018 09:04:04.976282  156894 config.go:182] Loaded profile config "multinode-013332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:04:04.976302  156894 status.go:174] checking status of multinode-013332 ...
	I1018 09:04:04.976832  156894 cli_runner.go:164] Run: docker container inspect multinode-013332 --format={{.State.Status}}
	I1018 09:04:04.996559  156894 status.go:371] multinode-013332 host status = "Stopped" (err=<nil>)
	I1018 09:04:04.996579  156894 status.go:384] host is not running, skipping remaining checks
	I1018 09:04:04.996585  156894 status.go:176] multinode-013332 status: &{Name:multinode-013332 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:04:04.996640  156894 status.go:174] checking status of multinode-013332-m02 ...
	I1018 09:04:04.996890  156894 cli_runner.go:164] Run: docker container inspect multinode-013332-m02 --format={{.State.Status}}
	I1018 09:04:05.015001  156894 status.go:371] multinode-013332-m02 host status = "Stopped" (err=<nil>)
	I1018 09:04:05.015026  156894 status.go:384] host is not running, skipping remaining checks
	I1018 09:04:05.015052  156894 status.go:176] multinode-013332-m02 status: &{Name:multinode-013332-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (17.51s)

TestMultiNode/serial/RestartMultiNode (26.85s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-013332 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-013332 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (26.257078377s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-013332 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (26.85s)

TestMultiNode/serial/ValidateNameConflict (24.92s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-013332
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-013332-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-013332-m02 --driver=docker  --container-runtime=crio: exit status 14 (65.503019ms)

-- stdout --
	* [multinode-013332-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-013332-m02' is duplicated with machine name 'multinode-013332-m02' in profile 'multinode-013332'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-013332-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-013332-m03 --driver=docker  --container-runtime=crio: (22.085559867s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-013332
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-013332: exit status 80 (283.129115ms)

-- stdout --
	* Adding node m03 to cluster multinode-013332 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-013332-m03 already exists in multinode-013332-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-013332-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-013332-m03: (2.437586565s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.92s)

TestPreload (100.21s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-611572 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-611572 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (45.166679058s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-611572 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-611572
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-611572: (5.896860587s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-611572 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-611572 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (45.494803409s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-611572 image list
helpers_test.go:175: Cleaning up "test-preload-611572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-611572
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-611572: (2.476496051s)
--- PASS: TestPreload (100.21s)

TestScheduledStopUnix (97.43s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-490122 --memory=3072 --driver=docker  --container-runtime=crio
E1018 09:06:49.589548    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-490122 --memory=3072 --driver=docker  --container-runtime=crio: (21.455623423s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-490122 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-490122 -n scheduled-stop-490122
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-490122 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1018 09:07:03.102597    9394 retry.go:31] will retry after 142.956µs: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
I1018 09:07:03.103772    9394 retry.go:31] will retry after 224.07µs: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
I1018 09:07:03.104921    9394 retry.go:31] will retry after 157.47µs: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
I1018 09:07:03.106049    9394 retry.go:31] will retry after 369.88µs: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
I1018 09:07:03.107188    9394 retry.go:31] will retry after 326.738µs: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
I1018 09:07:03.108320    9394 retry.go:31] will retry after 524.807µs: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
I1018 09:07:03.109451    9394 retry.go:31] will retry after 1.663061ms: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
I1018 09:07:03.111640    9394 retry.go:31] will retry after 2.170983ms: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
I1018 09:07:03.114869    9394 retry.go:31] will retry after 3.433029ms: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
I1018 09:07:03.119108    9394 retry.go:31] will retry after 2.809384ms: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
I1018 09:07:03.122442    9394 retry.go:31] will retry after 5.057759ms: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
I1018 09:07:03.127653    9394 retry.go:31] will retry after 10.979445ms: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
I1018 09:07:03.138980    9394 retry.go:31] will retry after 13.692977ms: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
I1018 09:07:03.153267    9394 retry.go:31] will retry after 19.445891ms: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
I1018 09:07:03.173959    9394 retry.go:31] will retry after 33.992213ms: open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/scheduled-stop-490122/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-490122 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-490122 -n scheduled-stop-490122
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-490122
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-490122 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1018 09:07:45.291585    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-490122
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-490122: exit status 7 (69.205295ms)

-- stdout --
	scheduled-stop-490122
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-490122 -n scheduled-stop-490122
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-490122 -n scheduled-stop-490122: exit status 7 (69.66772ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-490122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-490122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-490122: (4.573607389s)
--- PASS: TestScheduledStopUnix (97.43s)

TestInsufficientStorage (10.2s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-754534 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-754534 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.664755051s)

-- stdout --
	{"specversion":"1.0","id":"82184591-4f17-45f7-8216-c5034622d896","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-754534] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bce5f724-d715-456d-b13e-ee7aa7be439f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21767"}}
	{"specversion":"1.0","id":"c7fbd360-154a-4399-bcd2-5658242fe7d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ed428354-f0f8-4d97-808e-6ad58c4685e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig"}}
	{"specversion":"1.0","id":"44857493-153f-485e-89fe-1ce2f2bb74f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube"}}
	{"specversion":"1.0","id":"7a8192db-14fe-4ce3-9413-28014611641b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5d85a38a-55de-43c6-a6df-ef76e00fe3eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3fc116eb-1359-4700-a998-5d83d01250d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a26e0aef-c16e-4662-9465-9ca42763efcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"28f8e9af-3614-4aa7-a759-68470f0b95b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a895250f-455b-438a-bb49-c2ccba91551b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0b582fc0-a545-45a3-84db-f8b35fd0805c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-754534\" primary control-plane node in \"insufficient-storage-754534\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"290ad6d5-14e9-498f-afaa-aa83c891514f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d54d6944-4f05-4d04-84e0-eccff73484c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4fb462ae-d39b-465b-bbea-bf6115ceec5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-754534 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-754534 --output=json --layout=cluster: exit status 7 (285.543672ms)

-- stdout --
	{"Name":"insufficient-storage-754534","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-754534","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1018 09:08:26.593727  176989 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-754534" does not appear in /home/jenkins/minikube-integration/21767-5897/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-754534 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-754534 --output=json --layout=cluster: exit status 7 (282.911291ms)

-- stdout --
	{"Name":"insufficient-storage-754534","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-754534","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1018 09:08:26.877115  177099 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-754534" does not appear in /home/jenkins/minikube-integration/21767-5897/kubeconfig
	E1018 09:08:26.887765  177099 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/insufficient-storage-754534/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-754534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-754534
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-754534: (1.963119417s)
--- PASS: TestInsufficientStorage (10.20s)

TestRunningBinaryUpgrade (54.5s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3547361766 start -p running-upgrade-152288 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1018 09:09:52.661768    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3547361766 start -p running-upgrade-152288 --memory=3072 --vm-driver=docker  --container-runtime=crio: (25.943071189s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-152288 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-152288 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.554121108s)
helpers_test.go:175: Cleaning up "running-upgrade-152288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-152288
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-152288: (2.507814095s)
--- PASS: TestRunningBinaryUpgrade (54.50s)

TestKubernetesUpgrade (316.48s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-463301 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-463301 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.130618792s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-463301
E1018 09:09:08.359937    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-463301: (2.399829993s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-463301 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-463301 status --format={{.Host}}: exit status 7 (116.92049ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-463301 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-463301 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.221569346s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-463301 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-463301 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-463301 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (97.093294ms)

-- stdout --
	* [kubernetes-upgrade-463301] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-463301
	    minikube start -p kubernetes-upgrade-463301 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4633012 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-463301 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-463301 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-463301 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.167687834s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-463301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-463301
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-463301: (3.276363806s)
--- PASS: TestKubernetesUpgrade (316.48s)

TestMissingContainerUpgrade (100.22s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1798233684 start -p missing-upgrade-196626 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1798233684 start -p missing-upgrade-196626 --memory=3072 --driver=docker  --container-runtime=crio: (48.218467554s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-196626
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-196626: (4.876590348s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-196626
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-196626 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-196626 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.138078669s)
helpers_test.go:175: Cleaning up "missing-upgrade-196626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-196626
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-196626: (2.435531214s)
--- PASS: TestMissingContainerUpgrade (100.22s)

TestNetworkPlugins/group/false (11.11s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-448954 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-448954 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (1.023427488s)

-- stdout --
	* [false-448954] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1018 09:08:33.203072  179101 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:08:33.203211  179101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:08:33.203223  179101 out.go:374] Setting ErrFile to fd 2...
	I1018 09:08:33.203230  179101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:08:33.203585  179101 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-5897/.minikube/bin
	I1018 09:08:33.204096  179101 out.go:368] Setting JSON to false
	I1018 09:08:33.205383  179101 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3061,"bootTime":1760775452,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:08:33.205469  179101 start.go:141] virtualization: kvm guest
	I1018 09:08:33.208044  179101 out.go:179] * [false-448954] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:08:33.211998  179101 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:08:33.212023  179101 notify.go:220] Checking for updates...
	I1018 09:08:33.214790  179101 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:08:33.216381  179101 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	I1018 09:08:33.217697  179101 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	I1018 09:08:33.222962  179101 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:08:33.228154  179101 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:08:33.230127  179101 config.go:182] Loaded profile config "kubernetes-upgrade-463301": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:08:33.230272  179101 config.go:182] Loaded profile config "offline-crio-179679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:08:33.230397  179101 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:08:33.259610  179101 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:08:33.259776  179101 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:08:33.367550  179101 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-18 09:08:33.35284451 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:08:33.367702  179101 docker.go:318] overlay module found
	I1018 09:08:33.470218  179101 out.go:179] * Using the docker driver based on user configuration
	I1018 09:08:33.592227  179101 start.go:305] selected driver: docker
	I1018 09:08:33.592263  179101 start.go:925] validating driver "docker" against <nil>
	I1018 09:08:33.592285  179101 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:08:33.689655  179101 out.go:203] 
	W1018 09:08:33.852282  179101 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1018 09:08:34.014403  179101 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-448954 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-448954

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-448954

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-448954

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-448954

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-448954

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-448954

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-448954

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-448954

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-448954

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-448954

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-448954

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-448954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-448954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-448954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-448954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-448954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-448954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-448954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-448954" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-448954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-448954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-448954" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null
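This empty kubeconfig is the common cause of every failure in this debug dump: with clusters, contexts, and users all null, the context false-448954 cannot resolve, hence "context was not found" from kubectl and the profile-not-found hints from minikube. The same state can be confirmed directly (command illustrative, not part of the test run):

	$ kubectl config get-contexts
	CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE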

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-448954

>>> host: docker daemon status:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: docker daemon config:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: /etc/docker/daemon.json:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: docker system info:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: cri-docker daemon status:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: cri-docker daemon config:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: cri-dockerd version:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: containerd daemon status:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: containerd daemon config:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: /etc/containerd/config.toml:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: containerd config dump:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: crio daemon status:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: crio daemon config:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: /etc/crio:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"

>>> host: crio config:
* Profile "false-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448954"
----------------------- debugLogs end: false-448954 [took: 9.846795504s] --------------------------------
helpers_test.go:175: Cleaning up "false-448954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-448954
--- PASS: TestNetworkPlugins/group/false (11.11s)

TestStoppedBinaryUpgrade/Setup (0.42s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.42s)

TestStoppedBinaryUpgrade/Upgrade (47.08s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.122263649 start -p stopped-upgrade-104675 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.122263649 start -p stopped-upgrade-104675 --memory=3072 --vm-driver=docker  --container-runtime=crio: (27.719203136s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.122263649 -p stopped-upgrade-104675 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.122263649 -p stopped-upgrade-104675 stop: (5.089009218s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-104675 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-104675 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.274223829s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (47.08s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-104675
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

TestPause/serial/Start (71.99s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-182020 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-182020 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m11.986866386s)
--- PASS: TestPause/serial/Start (71.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548249 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-548249 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (68.994984ms)

-- stdout --
	* [NoKubernetes-548249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-5897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-5897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
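Exit status 14 (MK_USAGE) is exactly what this subtest asserts: --no-kubernetes and --kubernetes-version are mutually exclusive. Had the version been pinned in the global config instead of on the command line, the stderr hint above would apply, roughly:

	$ minikube config unset kubernetes-version
	$ minikube start -p NoKubernetes-548249 --no-kubernetes --driver=docker --container-runtime=crio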

TestNoKubernetes/serial/StartWithK8s (22.32s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548249 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-548249 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.998230554s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-548249 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (22.32s)

TestNoKubernetes/serial/StartWithStopK8s (17.2s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548249 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-548249 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.857159344s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-548249 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-548249 status -o json: exit status 2 (300.08078ms)

-- stdout --
	{"Name":"NoKubernetes-548249","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-548249
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-548249: (2.039992188s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.20s)
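The JSON status is what lets the test tell a Kubernetes-less node apart from a stopped one: Host stays Running while Kubelet and APIServer report Stopped, which is also why the status command itself exits 2. The same fields could be pulled out with jq (usage illustrative):

	$ out/minikube-linux-amd64 -p NoKubernetes-548249 status -o json | jq -r '.Host, .Kubelet, .APIServer'
	Running
	Stopped
	Stopped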

TestNoKubernetes/serial/Start (4.69s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548249 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-548249 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.692609423s)
--- PASS: TestNoKubernetes/serial/Start (4.69s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-548249 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-548249 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.976817ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
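The "Process exited with status 3" in stderr is the outcome the test wants: systemctl is-active exits 0 only for an active unit and 3 for an inactive or missing one, so a non-zero exit proves the kubelet never started. Without --quiet the same probe also prints the state:

	$ systemctl is-active kubelet; echo $?
	inactive
	3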

TestNoKubernetes/serial/ProfileList (1.86s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.86s)

TestNoKubernetes/serial/Stop (1.26s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-548249
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-548249: (1.256817432s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

TestPause/serial/SecondStartNoReconfiguration (6.01s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-182020 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-182020 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.001733233s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.01s)

TestNoKubernetes/serial/StartNoArgs (6.75s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548249 --driver=docker  --container-runtime=crio
E1018 09:11:49.586140    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-548249 --driver=docker  --container-runtime=crio: (6.751378317s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.75s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-548249 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-548249 "sudo systemctl is-active --quiet service kubelet": exit status 1 (300.64318ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestNetworkPlugins/group/auto/Start (43.42s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-448954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-448954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.417743338s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.42s)

TestNetworkPlugins/group/kindnet/Start (40.44s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-448954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-448954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (40.435690462s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (40.44s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-448954 "pgrep -a kubelet"
I1018 09:12:41.252282    9394 config.go:182] Loaded profile config "auto-448954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)
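pgrep -a prints each matching PID together with its full command line, which is what lets KubeletFlags assert on the arguments the kubelet actually runs with. The shape of the output is roughly the following (PID, path, and flags abridged and illustrative):

	$ out/minikube-linux-amd64 ssh -p auto-448954 "pgrep -a kubelet"
	1234 /var/lib/minikube/binaries/v1.34.1/kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock ...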

TestNetworkPlugins/group/auto/NetCatPod (9.25s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-448954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8vxtr" [d764a258-54bb-4bd8-98f7-1ed430ccdf1a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8vxtr" [d764a258-54bb-4bd8-98f7-1ed430ccdf1a] Running
E1018 09:12:45.291388    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005091118s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.25s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-8chzc" [bc6ccf61-4ab9-4eba-ac43-def6ab50349e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0040977s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-448954 "pgrep -a kubelet"
I1018 09:12:50.384616    9394 config.go:182] Loaded profile config "kindnet-448954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-448954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pdkh2" [5f88d871-3513-440f-8e93-6ba2bd953a24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pdkh2" [5f88d871-3513-440f-8e93-6ba2bd953a24] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003330925s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-448954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

TestNetworkPlugins/group/auto/HairPin (0.09s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
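The Localhost and HairPin probes share one netcat invocation: -z asks nc to connect without sending data, -w 5 caps the connect wait at five seconds, and -i 5 inserts a delay between successive ports or lines (largely inert for a single-port probe). HairPin only changes the target: the pod dials its own service name instead of localhost, checking that traffic can loop back through the service IP, along the lines of:

	$ kubectl --context auto-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"  # hedged variant of the probe above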

TestNetworkPlugins/group/kindnet/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-448954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/calico/Start (47.61s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-448954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-448954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (47.611832674s)
--- PASS: TestNetworkPlugins/group/calico/Start (47.61s)

TestNetworkPlugins/group/custom-flannel/Start (51.57s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-448954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-448954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.568881524s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.57s)

TestNetworkPlugins/group/bridge/Start (41.8s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-448954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-448954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (41.803892595s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.80s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-mg7vx" [6d88da64-c513-429a-a0ed-ac99b9e0e796] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005050782s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/Start (55.24s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-448954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-448954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (55.238164561s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.24s)

TestNetworkPlugins/group/calico/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-448954 "pgrep -a kubelet"
I1018 09:14:03.869000    9394 config.go:182] Loaded profile config "calico-448954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

TestNetworkPlugins/group/calico/NetCatPod (9.07s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-448954 replace --force -f testdata/netcat-deployment.yaml
I1018 09:14:04.457874    9394 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f6r8k" [f47c4699-8550-4061-8e1f-83543f3872c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f6r8k" [f47c4699-8550-4061-8e1f-83543f3872c9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.005009953s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.07s)
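The kapi.go line in this block shows the stabilization rule the helper applies before polling pods: metadata.generation must match status.observedGeneration and spec.replicas must be reflected in status.replicas. The same fields can be read straight off the deployment (jsonpath usage illustrative):

	$ kubectl --context calico-448954 get deployment netcat -o jsonpath='{.metadata.generation} {.status.observedGeneration} {.status.replicas}'
	1 1 1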

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-448954 "pgrep -a kubelet"
I1018 09:14:11.906697    9394 config.go:182] Loaded profile config "custom-flannel-448954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-448954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wh2pd" [f7894baa-1595-4b28-81da-a9e4cddd6589] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wh2pd" [f7894baa-1595-4b28-81da-a9e4cddd6589] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004211506s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

TestNetworkPlugins/group/calico/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-448954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-448954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-448954 "pgrep -a kubelet"
I1018 09:14:27.490517    9394 config.go:182] Loaded profile config "bridge-448954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

TestNetworkPlugins/group/bridge/NetCatPod (10.24s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-448954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z8v28" [c761ab48-d012-4134-9467-229b6feb913d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z8v28" [c761ab48-d012-4134-9467-229b6feb913d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004027145s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

TestNetworkPlugins/group/enable-default-cni/Start (69.74s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-448954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-448954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m9.739256323s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.74s)

TestNetworkPlugins/group/bridge/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-448954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestStartStop/group/old-k8s-version/serial/FirstStart (51.59s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-951975 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-951975 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.588336568s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.59s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-fdwnr" [875bd47a-bf7e-4297-8f09-b289ece1d3d1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003276853s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestStartStop/group/no-preload/serial/FirstStart (54.59s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-031066 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-031066 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.591416143s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.59s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-448954 "pgrep -a kubelet"
I1018 09:15:01.764171    9394 config.go:182] Loaded profile config "flannel-448954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/flannel/NetCatPod (8.48s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-448954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9bnht" [e13e6bdf-bade-456c-897b-0c0ba61bfe12] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9bnht" [e13e6bdf-bade-456c-897b-0c0ba61bfe12] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004480475s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.48s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-448954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)
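
Note on the two connectivity probes above: Localhost and HairPin run the same nc probe from inside the netcat pod, but against different targets. Localhost dials localhost:8080 and only proves the container can reach itself over loopback; HairPin dials the pod's own Service name (netcat), so the connection leaves the pod, hits the Service VIP, and must be NATed back to the very pod that sent it (hairpin NAT). A minimal sketch of the pair, assuming the same single-replica netcat Deployment and Service on port 8080 that these tests deploy:

    # loopback only - no CNI involvement
    kubectl --context flannel-448954 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # out through the Service VIP and back into the same pod - requires hairpin NAT
    kubectl --context flannel-448954 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"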

TestStartStop/group/embed-certs/serial/FirstStart (71.58s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m11.579447696s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.58s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-951975 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5e92717a-fb6d-4a62-a6dd-08ea5401487b] Pending
helpers_test.go:352: "busybox" [5e92717a-fb6d-4a62-a6dd-08ea5401487b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5e92717a-fb6d-4a62-a6dd-08ea5401487b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003548267s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-951975 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.26s)
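
Each DeployApp step follows the same three-beat pattern visible in the log: create the busybox workload from testdata/busybox.yaml, wait for a pod labelled integration-test=busybox to reach Running, then exec a trivial command (ulimit -n) to confirm the container actually accepts commands. Reproduced by hand it is roughly the following sketch; the kubectl wait invocation stands in for the harness's own poll loop and is not a command taken from this log:

    kubectl --context old-k8s-version-951975 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-951975 wait pod \
      -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context old-k8s-version-951975 exec busybox -- /bin/sh -c "ulimit -n"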

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-448954 "pgrep -a kubelet"
I1018 09:15:45.850017    9394 config.go:182] Loaded profile config "enable-default-cni-448954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-448954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8hkp4" [4a893e65-0de8-4a12-9c64-5cb564f29301] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8hkp4" [4a893e65-0de8-4a12-9c64-5cb564f29301] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.003680028s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

TestStartStop/group/old-k8s-version/serial/Stop (16.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-951975 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-951975 --alsologtostderr -v=3: (16.027449167s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.03s)

TestStartStop/group/no-preload/serial/DeployApp (7.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-031066 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f45fe433-dd70-4ef8-86fc-49c43f3e3c71] Pending
helpers_test.go:352: "busybox" [f45fe433-dd70-4ef8-86fc-49c43f3e3c71] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f45fe433-dd70-4ef8-86fc-49c43f3e3c71] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004306217s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-031066 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-448954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-448954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestStartStop/group/no-preload/serial/Stop (18.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-031066 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-031066 --alsologtostderr -v=3: (18.129056266s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.13s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-951975 -n old-k8s-version-951975
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-951975 -n old-k8s-version-951975: exit status 7 (79.554932ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-951975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
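
The "(may be ok)" note above is deliberate: with the host stopped, minikube status exits non-zero by design, and in this log exit status 7 accompanies a Host value of "Stopped" rather than a real failure. A sketch of tolerating that case in a shell script (profile name as above; only the exit-7/Stopped pairing observed in this log is assumed, not a full exit-code table):

    st=0
    out/minikube-linux-amd64 status --format='{{.Host}}' \
      -p old-k8s-version-951975 -n old-k8s-version-951975 || st=$?
    if [ "$st" -eq 7 ]; then
      echo "host is Stopped (exit 7) - expected right after 'minikube stop'"
    elif [ "$st" -ne 0 ]; then
      echo "status failed unexpectedly: exit $st" >&2
    fi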

TestStartStop/group/old-k8s-version/serial/SecondStart (44.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-951975 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-951975 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (44.33900706s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-951975 -n old-k8s-version-951975
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.69s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.599760058s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.60s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-031066 -n no-preload-031066
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-031066 -n no-preload-031066: exit status 7 (72.714236ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-031066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (52.46s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-031066 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-031066 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.12297569s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-031066 -n no-preload-031066
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.46s)

TestStartStop/group/embed-certs/serial/DeployApp (8.23s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-880603 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ef4e2065-8b84-4980-be88-6bfeded4c762] Pending
helpers_test.go:352: "busybox" [ef4e2065-8b84-4980-be88-6bfeded4c762] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ef4e2065-8b84-4980-be88-6bfeded4c762] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003077945s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-880603 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.23s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-qms7p" [55b6301a-677b-42eb-90f9-ff3b66ddb759] Running
E1018 09:16:49.585910    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/addons-757656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003779752s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/Stop (16.91s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-880603 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-880603 --alsologtostderr -v=3: (16.905730702s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.91s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-qms7p" [55b6301a-677b-42eb-90f9-ff3b66ddb759] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004255149s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-951975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-951975 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
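
VerifyKubernetesImages asks minikube for the images loaded in the cluster (image list --format=json) and reports anything outside the expected Kubernetes image set, which is what the "Found non-minikube image" lines are. To eyeball the same list by hand, a sketch along these lines works, assuming jq is installed and that each JSON entry exposes a repoTags array (an assumption about the output shape, not something this log shows):

    out/minikube-linux-amd64 -p old-k8s-version-951975 image list --format=json \
      | jq -r '.[].repoTags[]?' \
      | grep -v '^registry.k8s.io/'   # leaves kindnetd, busybox, etc. as "non-minikube"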

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-986220 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [335a5ad4-0ec1-49da-9c93-b12fad5660a4] Pending
helpers_test.go:352: "busybox" [335a5ad4-0ec1-49da-9c93-b12fad5660a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [335a5ad4-0ec1-49da-9c93-b12fad5660a4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004688999s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-986220 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

TestStartStop/group/newest-cni/serial/FirstStart (29.52s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (29.524500463s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.52s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880603 -n embed-certs-880603
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880603 -n embed-certs-880603: exit status 7 (81.224616ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-880603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (49.07s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-880603 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.729479506s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880603 -n embed-certs-880603
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.07s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (17.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-986220 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-986220 --alsologtostderr -v=3: (17.050640975s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (17.05s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z9ksf" [05ed1364-ee9d-4f68-b87e-3310bf7a0d42] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00345071s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z9ksf" [05ed1364-ee9d-4f68-b87e-3310bf7a0d42] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00371058s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-031066 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-031066 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-986220 -n default-k8s-diff-port-986220
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-986220 -n default-k8s-diff-port-986220: exit status 7 (86.903825ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-986220 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-986220 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.373616574s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-986220 -n default-k8s-diff-port-986220
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.69s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (13.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-444637 --alsologtostderr -v=3
E1018 09:17:41.485596    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/auto-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:41.492009    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/auto-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:41.503467    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/auto-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:41.524956    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/auto-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:41.566423    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/auto-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:41.647873    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/auto-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:41.809454    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/auto-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:42.130715    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/auto-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:42.772482    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/auto-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:44.054499    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/auto-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:44.097976    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/kindnet-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:44.104447    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/kindnet-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:44.115894    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/kindnet-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:44.137400    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/kindnet-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:44.178853    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/kindnet-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:44.260430    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/kindnet-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:44.422536    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/kindnet-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:44.744285    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/kindnet-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:45.291419    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/functional-897534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:45.386103    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/kindnet-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:46.615890    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/auto-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:46.668387    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/kindnet-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:49.230591    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/kindnet-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:51.737574    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/auto-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-444637 --alsologtostderr -v=3: (13.252110895s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.25s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-444637 -n newest-cni-444637
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-444637 -n newest-cni-444637: exit status 7 (68.982597ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-444637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1018 09:17:54.352748    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/kindnet-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (10.88s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-444637 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.548969468s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-444637 -n newest-cni-444637
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.88s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bdrc4" [edf6770a-0607-485d-8eef-aab09553ed76] Running
E1018 09:18:01.979208    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/auto-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:18:04.594601    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/kindnet-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003233997s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-444637 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bdrc4" [edf6770a-0607-485d-8eef-aab09553ed76] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004469394s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-880603 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-880603 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gwp9p" [c10f0845-9777-48ac-b709-3775518d787b] Running
E1018 09:18:22.461054    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/auto-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:18:25.076504    9394 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-5897/.minikube/profiles/kindnet-448954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004360635s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gwp9p" [c10f0845-9777-48ac-b709-3775518d787b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003465986s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-986220 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-986220 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

Test skip (26/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
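
All three TunnelCmd DNS subtests fail the same precondition: minikube's DNS forwarding is implemented only for the Hyperkit driver on macOS. A sketch of that double gate, assuming both the OS and the driver name are checked (the names are illustrative):

package main

import (
	"runtime"
	"testing"
)

// skipUnlessHyperkitOnDarwin mirrors the skip reason printed above; on this
// Linux run with the docker driver, both halves of the condition fail.
func skipUnlessHyperkitOnDarwin(t *testing.T, driver string) {
	if runtime.GOOS != "darwin" || driver != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
	}
}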

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
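
TestChangeNoneUser needs two things at once: the none driver and a non-empty SUDO_USER, since it verifies file ownership is handed back to the invoking user. A sketch of that precondition, assuming a plain os.Getenv lookup:

package main

import (
	"os"
	"testing"
)

// requireNoneDriverAndSudoUser reconstructs the condition from none_test.go:38;
// this CI run uses the docker driver, so the test is skipped.
func requireNoneDriverAndSudoUser(t *testing.T, driver string) {
	if driver != "none" || os.Getenv("SUDO_USER") == "" {
		t.Skip("Test requires none driver and SUDO_USER env to not be empty")
	}
}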

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.29s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-448954 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-448954

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-448954

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-448954

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-448954

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-448954

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-448954

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-448954

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-448954

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-448954

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-448954

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: /etc/hosts:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: /etc/resolv.conf:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-448954

>>> host: crictl pods:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: crictl containers:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> k8s: describe netcat deployment:
error: context "kubenet-448954" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-448954" does not exist

>>> k8s: netcat logs:
error: context "kubenet-448954" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-448954" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-448954" does not exist

>>> k8s: coredns logs:
error: context "kubenet-448954" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-448954" does not exist

>>> k8s: api server logs:
error: context "kubenet-448954" does not exist

>>> host: /etc/cni:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: ip a s:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: ip r s:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: iptables-save:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: iptables table nat:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-448954" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-448954" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-448954" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: kubelet daemon config:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> k8s: kubelet logs:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-448954

>>> host: docker daemon status:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: docker daemon config:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: docker system info:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: cri-docker daemon status:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: cri-docker daemon config:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: cri-dockerd version:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: containerd daemon status:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: containerd daemon config:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: containerd config dump:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: crio daemon status:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: crio daemon config:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: /etc/crio:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

>>> host: crio config:
* Profile "kubenet-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448954"

----------------------- debugLogs end: kubenet-448954 [took: 4.084133901s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-448954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-448954
--- SKIP: TestNetworkPlugins/group/kubenet (4.29s)
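
Every kubectl probe in the debugLogs above fails identically because the kubenet-448954 profile was never started: the dumped kubeconfig shows clusters: null and contexts: null, so any lookup of the named context misses. A minimal sketch of that failure mode using client-go's clientcmd loader (assumed usage for illustration, not the harness's own code):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig like the one dumped above; with contexts: null the
	// Contexts map is empty, so the named context can never be found.
	cfg, err := clientcmd.LoadFromFile("/path/to/kubeconfig") // path is illustrative
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Contexts["kubenet-448954"]; !ok {
		// kubectl reports this condition as:
		//   context was not found for specified context: kubenet-448954
		fmt.Println("context not found: kubenet-448954")
	}
}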

TestNetworkPlugins/group/cilium (4.78s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-448954 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-448954

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-448954

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-448954

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-448954

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-448954

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-448954

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-448954

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-448954

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-448954

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-448954

>>> host: /etc/nsswitch.conf:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: /etc/hosts:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: /etc/resolv.conf:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-448954

>>> host: crictl pods:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: crictl containers:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> k8s: describe netcat deployment:
error: context "cilium-448954" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-448954" does not exist

>>> k8s: netcat logs:
error: context "cilium-448954" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-448954" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-448954" does not exist

>>> k8s: coredns logs:
error: context "cilium-448954" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-448954" does not exist

>>> k8s: api server logs:
error: context "cilium-448954" does not exist

>>> host: /etc/cni:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: ip a s:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: ip r s:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: iptables-save:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: iptables table nat:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-448954

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-448954

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-448954" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-448954" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-448954

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-448954

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-448954" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-448954" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-448954" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-448954" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-448954" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: kubelet daemon config:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> k8s: kubelet logs:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-448954

>>> host: docker daemon status:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: docker daemon config:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: docker system info:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: cri-docker daemon status:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: cri-docker daemon config:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: cri-dockerd version:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: containerd daemon status:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: containerd daemon config:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: containerd config dump:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: crio daemon status:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: crio daemon config:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: /etc/crio:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

>>> host: crio config:
* Profile "cilium-448954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448954"

----------------------- debugLogs end: cilium-448954 [took: 4.590591748s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-448954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-448954
--- SKIP: TestNetworkPlugins/group/cilium (4.78s)
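
Although both network-plugin tests are skipped, each still registers a profile stub, so the helpers delete it with the exact command shown in the log. A sketch of that cleanup as a plain exec call (the helper's real signature isn't shown here):

package main

import (
	"fmt"
	"os/exec"
)

// cleanupProfile runs the same delete the helper logs:
//   out/minikube-linux-amd64 delete -p cilium-448954
func cleanupProfile(profile string) error {
	out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	if err := cleanupProfile("cilium-448954"); err != nil {
		fmt.Println("delete failed:", err)
	}
}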

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-634520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-634520
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)